[
{
"msg_contents": "Centos 5.X kernel 2.6.18-274\npgsql-9.1 from pgdg-91-centos.repo\nrelatively small database 3.2Gb\nLot of insert, update, delete.\n\nI see non balanced _User_ usage on 14 CPU, exclusively assigned to the hardware raid controller.\nWhat I'm doing wrong, and is it possible somehow to fix?\n\nThanks in advance.\n\nAndrew.\n\n# top -d 10.00 -b -n 2 -U postgres -c\n\ntop - 23:18:19 up 453 days, 57 min, 3 users, load average: 0.55, 0.47, 0.42\nTasks: 453 total, 1 running, 452 sleeping, 0 stopped, 0 zombie\nCpu0 : 0.6%us, 0.1%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu3 : 1.2%us, 0.1%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu4 : 2.6%us, 0.4%sy, 0.0%ni, 96.8%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st\nCpu5 : 0.8%us, 0.0%sy, 0.0%ni, 99.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu6 : 5.4%us, 0.2%sy, 0.0%ni, 94.2%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu7 : 3.3%us, 0.4%sy, 0.0%ni, 96.1%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st\nCpu8 : 1.4%us, 0.3%sy, 0.0%ni, 98.2%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st\nCpu9 : 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu10 : 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu11 : 1.6%us, 0.6%sy, 0.0%ni, 97.4%id, 0.0%wa, 0.0%hi, 0.4%si, 0.0%st\nCpu12 : 0.5%us, 0.1%sy, 0.0%ni, 99.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu13 : 1.4%us, 0.2%sy, 0.0%ni, 98.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu14 : 24.2%us, 0.8%sy, 0.0%ni, 74.5%id, 0.3%wa, 0.0%hi, 0.2%si, 0.0%st\nCpu15 : 0.7%us, 0.1%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.1%hi, 0.1%si, 0.0%st\nMem: 16426540k total, 16356772k used, 69768k free, 215764k buffers\nSwap: 4194232k total, 145280k used, 4048952k free, 14434356k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 6513 postgres 16 0 4329m 235m 225m S 3.1 1.5 0:02.24 postgres: XXXX_DB [local] idle\n 6891 postgres 16 0 4331m 223m 213m S 1.7 1.4 0:01.44 postgres: XXXX_DB [local] idle\n 6829 postgres 16 0 4329m 219m 210m S 1.6 1.4 0:01.56 postgres: XXXX_DB [local] idle\n 6539 postgres 16 0 4330m 319m 308m S 1.5 2.0 0:03.64 postgres: XXXX_DB [local] idle\n 6487 postgres 16 0 4329m 234m 224m S 1.2 1.5 0:02.95 postgres: XXXX_DB [local] idle\n 6818 postgres 16 0 4328m 224m 215m S 1.2 1.4 0:02.00 postgres: XXXX_DB [local] idle\n 6831 postgres 16 0 4328m 215m 206m S 1.2 1.3 0:01.41 postgres: XXXX_DB [local] idle\n 6868 postgres 16 0 4330m 223m 213m S 1.2 1.4 0:01.46 postgres: XXXX_DB [local] idle\n 6899 postgres 15 0 4328m 220m 211m S 1.2 1.4 0:01.61 postgres: XXXX_DB [local] idle\n 6515 postgres 15 0 4331m 233m 223m S 1.0 1.5 0:02.66 postgres: XXXX_DB [local] idle\n 6890 postgres 16 0 4331m 279m 268m S 1.0 1.7 0:02.01 postgres: XXXX_DB [local] idle\n 7083 postgres 15 0 4328m 207m 199m S 1.0 1.3 0:00.77 postgres: XXXX_DB [local] idle\n 6374 postgres 16 0 4329m 245m 235m S 0.9 1.5 0:04.30 postgres: XXXX_DB [local] idle\n 6481 postgres 15 0 4328m 293m 285m S 0.9 1.8 0:03.17 postgres: XXXX_DB [local] idle\n 6484 postgres 16 0 4329m 236m 226m S 0.9 1.5 0:02.82 postgres: XXXX_DB [local] idle\n 6509 postgres 16 0 4332m 237m 225m S 0.9 1.5 0:02.90 postgres: XXXX_DB [local] idle\n 6522 postgres 15 0 4330m 238m 228m S 0.9 1.5 0:02.35 postgres: XXXX_DB [local] idle\n 6812 postgres 16 0 4329m 283m 274m S 0.9 1.8 0:02.19 postgres: XXXX_DB [local] idle\n 7086 postgres 15 0 4328m 202m 194m S 0.9 1.3 0:00.70 postgres: XXXX_DB [local] idle\n 6494 postgres 15 0 4329m 317m 306m S 0.8 
2.0 0:03.98 postgres: XXXX_DB [local] idle\n 6542 postgres 16 0 4330m 309m 299m S 0.8 1.9 0:02.79 postgres: XXXX_DB [local] idle\n 6550 postgres 15 0 4329m 287m 277m S 0.8 1.8 0:02.80 postgres: XXXX_DB [local] idle\n 6777 postgres 16 0 4329m 229m 219m S 0.8 1.4 0:02.13 postgres: XXXX_DB [local] idle\n 6816 postgres 16 0 4329m 230m 220m S 0.8 1.4 0:01.61 postgres: XXXX_DB [local] idle\n 6822 postgres 15 0 4329m 305m 295m S 0.8 1.9 0:02.09 postgres: XXXX_DB [local] idle\n 6897 postgres 15 0 4328m 219m 210m S 0.8 1.4 0:01.69 postgres: XXXX_DB [local] idle\n 6926 postgres 16 0 4328m 209m 200m S 0.8 1.3 0:00.81 postgres: XXXX_DB [local] idle\n 6473 postgres 16 0 4329m 236m 226m S 0.7 1.5 0:02.81 postgres: XXXX_DB [local] idle\n 6826 postgres 16 0 4330m 226m 216m S 0.7 1.4 0:02.14 postgres: XXXX_DB [local] idle\n 6834 postgres 16 0 4331m 282m 271m S 0.7 1.8 0:03.06 postgres: XXXX_DB [local] idle\n 6882 postgres 15 0 4330m 222m 212m S 0.7 1.4 0:01.83 postgres: XXXX_DB [local] idle\n 6885 postgres 16 0 4328m 104m 96m S 0.6 0.7 0:00.94 postgres: XXXX_DB [local] idle\n 6878 postgres 15 0 4319m 2992 1472 S 0.4 0.0 40:20.10 postgres: wal sender process postgres 555.555.555.555(47880) streaming 21B/2BFE82F8\n 6519 postgres 16 0 4330m 249m 240m S 0.3 1.6 0:03.14 postgres: XXXX_DB [local] idle\n 6477 postgres 16 0 4331m 239m 228m S 0.2 1.5 0:02.75 postgres: XXXX_DB [local] idle\n 6500 postgres 16 0 4328m 227m 219m S 0.2 1.4 0:01.84 postgres: XXXX_DB [local] idle\n 6576 postgres 16 0 4331m 289m 278m S 0.2 1.8 0:03.01 postgres: XXXX_DB [local] idle\n 6637 postgres 16 0 4330m 230m 220m S 0.2 1.4 0:02.13 postgres: XXXX_DB [local] idle\n 6773 postgres 16 0 4330m 225m 214m S 0.2 1.4 0:02.98 postgres: XXXX_DB [local] idle\n 6838 postgres 16 0 4329m 224m 215m S 0.2 1.4 0:01.30 postgres: XXXX_DB [local] idle\n 7283 postgres 16 0 4326m 24m 18m S 0.2 0.2 0:00.08 postgres: XXXX_DB [local] idle\n 6378 postgres 16 0 4329m 267m 258m S 0.1 1.7 0:03.74 postgres: XXXX_DB [local] idle\n 6439 postgres 15 0 4330m 256m 244m S 0.1 1.6 0:03.62 postgres: XXXX_DB [local] idle\n 6535 postgres 15 0 4330m 289m 279m S 0.1 1.8 0:03.14 postgres: XXXX_DB [local] idle\n 6538 postgres 15 0 4330m 231m 221m S 0.1 1.4 0:02.17 postgres: XXXX_DB [local] idle\n 6544 postgres 15 0 4329m 226m 216m S 0.1 1.4 0:01.86 postgres: XXXX_DB [local] idle\n 6546 postgres 15 0 4329m 229m 219m S 0.1 1.4 0:02.40 postgres: XXXX_DB [local] idle\n 6552 postgres 16 0 4330m 246m 236m S 0.1 1.5 0:02.49 postgres: XXXX_DB [local] idle\n 6555 postgres 15 0 4328m 226m 217m S 0.1 1.4 0:02.05 postgres: XXXX_DB [local] idle\n 6558 postgres 16 0 4329m 233m 223m S 0.1 1.5 0:02.59 postgres: XXXX_DB [local] idle\n 6572 postgres 16 0 4328m 227m 218m S 0.1 1.4 0:01.69 postgres: XXXX_DB [local] idle\n 6580 postgres 16 0 4329m 229m 220m S 0.1 1.4 0:02.34 postgres: XXXX_DB [local] idle\n 6724 postgres 16 0 4331m 231m 220m S 0.1 1.4 0:01.80 postgres: XXXX_DB [local] idle\n 6804 postgres 16 0 4328m 115m 106m S 0.1 0.7 0:01.48 postgres: XXXX_DB [local] idle\n 6811 postgres 15 0 4329m 223m 214m S 0.1 1.4 0:01.51 postgres: XXXX_DB [local] idle\n 6821 postgres 16 0 4331m 306m 295m S 0.1 1.9 0:02.19 postgres: XXXX_DB [local] idle\n 6836 postgres 16 0 4329m 226m 216m S 0.1 1.4 0:01.72 postgres: XXXX_DB [local] idle\n 6879 postgres 16 0 4330m 222m 212m S 0.1 1.4 0:01.84 postgres: XXXX_DB [local] idle\n 6888 postgres 16 0 4328m 216m 208m S 0.1 1.4 0:01.32 postgres: XXXX_DB [local] idle\n 6896 postgres 16 0 4328m 213m 206m S 0.1 1.3 0:01.07 postgres: XXXX_DB [local] idle\n14999 postgres 15 0 
115m 1840 808 S 0.1 0.0 29:59.16 postgres: stats collector process\n 830 postgres 15 0 4319m 8396 6420 S 0.0 0.1 0:00.06 postgres: XXXX_DB 192.168.0.1(42974) idle\n 6808 postgres 15 0 4328m 222m 214m S 0.0 1.4 0:01.80 postgres: XXXX_DB [local] idle\n 6873 postgres 15 0 4329m 222m 213m S 0.0 1.4 0:01.92 postgres: XXXX_DB [local] idle\n 6875 postgres 16 0 4329m 228m 219m S 0.0 1.4 0:02.46 postgres: XXXX_DB [local] idle\n 6906 postgres 16 0 4328m 216m 208m S 0.0 1.4 0:00.83 postgres: XXXX_DB [local] idle\n 7274 postgres 15 0 4344m 534m 531m S 0.0 3.3 0:00.37 postgres: autovacuum worker process XXXX_DB\n 7818 postgres 15 0 4319m 6640 4680 S 0.0 0.0 0:00.06 postgres: postgres XXXX_DB 193.8.246.6(1032) idle\n10553 postgres 15 0 4319m 6940 5000 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(35402) idle\n10600 postgres 15 0 4319m 6780 4848 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(35612) idle\n11146 postgres 15 0 4319m 7692 5744 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(39366) idle\n12291 postgres 15 0 4319m 6716 4784 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(49540) idle\n12711 postgres 15 0 4319m 8048 5984 S 0.0 0.0 0:00.02 postgres: XXXX_DB 192.168.0.1(51440) idle\n12717 postgres 15 0 4319m 6768 4836 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(51616) idle\n12815 postgres 15 0 4319m 6540 4608 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(52989) idle\n13140 postgres 15 0 4319m 7736 5660 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(55225) idle\n14378 postgres 15 0 4320m 7324 4928 S 0.0 0.0 0:00.03 postgres: postgres postgres 222.222.222.222(1030) idle\n14979 postgres 15 0 4316m 104m 103m S 0.0 0.6 6:46.80 /usr/pgsql-9.1/bin/postmaster -p 5432 -D /var/lib/pgsql/9.1/data\n14981 postgres 15 0 112m 1368 728 S 0.0 0.0 0:00.06 postgres: logger process\n14995 postgres 15 0 4320m 2.0g 2.0g S 0.0 12.7 4:44.31 postgres: writer process\n14996 postgres 15 0 4318m 17m 16m S 0.0 0.1 0:12.76 postgres: wal writer process\n14997 postgres 15 0 4319m 3312 1568 S 0.0 0.0 0:10.14 postgres: autovacuum launcher process\n14998 postgres 15 0 114m 1444 756 S 0.0 0.0 0:13.06 postgres: archiver process last was 000000010000021B0000002A\n15027 postgres 15 0 4319m 80m 77m S 0.0 0.5 31:35.48 postgres: monitor XXXX_DB 10.0.0.0 (55433) idle\n15070 postgres 15 0 4319m 82m 80m S 0.0 0.5 28:39.80 postgres: monitor XXXX_DB 10.10.0.1 (59360) idle\n15808 postgres 15 0 4324m 15m 10m S 0.0 0.1 0:00.27 postgres: postgres XXXX_DB 222.222.222.222 (1031) idle\n18787 postgres 15 0 4319m 8004 5932 S 0.0 0.0 0:00.02 postgres: XXXX_DB 192.168.0.1(46831) idle\n18850 postgres 15 0 4319m 7364 5304 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(48843) idle\n20331 postgres 15 0 4319m 6592 4660 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(60573) idle\n26950 postgres 15 0 4319m 8172 6136 S 0.0 0.0 0:00.03 postgres: XXXX_DB 192.168.0.1(47890) idle\n27599 postgres 15 0 4319m 8220 6200 S 0.0 0.1 0:00.04 postgres: XXXX_DB 192.168.0.1(49566) idle\n28039 postgres 15 0 4319m 6644 4696 S 0.0 0.0 0:00.00 postgres: XXXX_DB 192.168.0.1(38329) idle\n30450 postgres 15 0 4319m 8412 6316 S 0.0 0.1 0:00.03 postgres: XXXX_DB 192.168.0.1(49490) idle\n31327 postgres 15 0 4319m 8508 6412 S 0.0 0.1 0:00.03 postgres: XXXX_DB 192.168.0.1(57064) idle\n31363 postgres 15 0 4319m 8428 6364 S 0.0 0.1 0:00.03 postgres: XXXX_DB 192.168.0.1(58128) idle\n32624 postgres 15 0 4319m 7356 5340 S 0.0 0.0 0:00.01 postgres: XXXX_DB 192.168.0.1(38002) idle\n32651 postgres 15 0 4319m 8540 6572 S 0.0 0.1 0:00.07 postgres: XXXX_DB 192.168.0.1(38544) idle\n-- 
\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 04 Jan 2013 03:45:08 +0400",
"msg_from": "=?UTF-8?B?bm9ib2R5IG5vd2hlcmU=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?U01QIG9uIGEgaGVhdnkgbG9hZGVkIGRhdGFiYXNl?="
},
{
"msg_contents": "On Thu, Jan 3, 2013 at 4:45 PM, nobody nowhere <[email protected]> wrote:\n> Centos 5.X kernel 2.6.18-274\n> pgsql-9.1 from pgdg-91-centos.repo\n> relatively small database 3.2Gb\n> Lot of insert, update, delete.\n>\n> I see non balanced _User_ usage on 14 CPU, exclusively assigned to the hardware raid controller.\n> What I'm doing wrong, and is it possible somehow to fix?\n>\n> Thanks in advance.\n>\n> Andrew.\n>\n> # top -d 10.00 -b -n 2 -U postgres -c\n>\n> top - 23:18:19 up 453 days, 57 min, 3 users, load average: 0.55, 0.47, 0.42\n> Tasks: 453 total, 1 running, 452 sleeping, 0 stopped, 0 zombie\n> Cpu0 : 0.6%us, 0.1%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n> Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n> Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n> Cpu3 : 1.2%us, 0.1%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n> Cpu4 : 2.6%us, 0.4%sy, 0.0%ni, 96.8%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st\n> Cpu5 : 0.8%us, 0.0%sy, 0.0%ni, 99.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n> Cpu6 : 5.4%us, 0.2%sy, 0.0%ni, 94.2%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st\n> Cpu7 : 3.3%us, 0.4%sy, 0.0%ni, 96.1%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st\n> Cpu8 : 1.4%us, 0.3%sy, 0.0%ni, 98.2%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st\n> Cpu9 : 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n> Cpu10 : 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n> Cpu11 : 1.6%us, 0.6%sy, 0.0%ni, 97.4%id, 0.0%wa, 0.0%hi, 0.4%si, 0.0%st\n> Cpu12 : 0.5%us, 0.1%sy, 0.0%ni, 99.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n> Cpu13 : 1.4%us, 0.2%sy, 0.0%ni, 98.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n> Cpu14 : 24.2%us, 0.8%sy, 0.0%ni, 74.5%id, 0.3%wa, 0.0%hi, 0.2%si, 0.0%st\n> Cpu15 : 0.7%us, 0.1%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.1%hi, 0.1%si, 0.0%st\n> Mem: 16426540k total, 16356772k used, 69768k free, 215764k buffers\n> Swap: 4194232k total, 145280k used, 4048952k free, 14434356k cached\n>\n\nSo how many concurrent users are accessing this db? pgsql assigns one\nprocess on one core so to speak. It can't spread load for one user\nover all cores.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 4 Jan 2013 00:42:47 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SMP on a heavy loaded database"
},
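Scott's question about concurrent users can be answered from the server itself. A minimal sketch, assuming shell access as the postgres user; the counts include idle connections, which is most of what the top output above shows:

# Total number of backends, and backends per database
psql -U postgres -c "SELECT count(*) FROM pg_stat_activity;"
psql -U postgres -c "SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname;"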
{
"msg_contents": "Пятница, 4 января 2013, 0:42 -07:00 от Scott Marlowe <[email protected]>:\n>On Thu, Jan 3, 2013 at 4:45 PM, nobody nowhere < [email protected] > wrote:\n>> Centos 5.X kernel 2.6.18-274\n>> pgsql-9.1 from pgdg-91-centos.repo\n>> relatively small database 3.2Gb\n>> Lot of insert, update, delete.\n>>\n>> I see non balanced _User_ usage on 14 CPU, exclusively assigned to the hardware raid controller.\n>> What I'm doing wrong, and is it possible somehow to fix?\n>>\n>> Thanks in advance.\n>>\n>> Andrew.\n>>\n>> # top -d 10.00 -b -n 2 -U postgres -c\n>>\n>> top - 23:18:19 up 453 days, 57 min, 3 users, load average: 0.55, 0.47, 0.42\n>> Tasks: 453 total, 1 running, 452 sleeping, 0 stopped, 0 zombie\n>> Cpu0 : 0.6%us, 0.1%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n>> Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n>> Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n>> Cpu3 : 1.2%us, 0.1%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n>> Cpu4 : 2.6%us, 0.4%sy, 0.0%ni, 96.8%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st\n>> Cpu5 : 0.8%us, 0.0%sy, 0.0%ni, 99.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n>> Cpu6 : 5.4%us, 0.2%sy, 0.0%ni, 94.2%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st\n>> Cpu7 : 3.3%us, 0.4%sy, 0.0%ni, 96.1%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st\n>> Cpu8 : 1.4%us, 0.3%sy, 0.0%ni, 98.2%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st\n>> Cpu9 : 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n>> Cpu10 : 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n>> Cpu11 : 1.6%us, 0.6%sy, 0.0%ni, 97.4%id, 0.0%wa, 0.0%hi, 0.4%si, 0.0%st\n>> Cpu12 : 0.5%us, 0.1%sy, 0.0%ni, 99.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n>> Cpu13 : 1.4%us, 0.2%sy, 0.0%ni, 98.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n>> Cpu14 : 24.2%us, 0.8%sy, 0.0%ni, 74.5%id, 0.3%wa, 0.0%hi, 0.2%si, 0.0%st\n>> Cpu15 : 0.7%us, 0.1%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.1%hi, 0.1%si, 0.0%st\n>> Mem: 16426540k total, 16356772k used, 69768k free, 215764k buffers\n>> Swap: 4194232k total, 145280k used, 4048952k free, 14434356k cached\n>>\n>\n>So how many concurrent users are accessing this db? pgsql assigns one\n>process on one core so to speak. 
It can't spread load for one user\n>over all cores.\n64 php Fast-cgi processes over the Unix socket and about 20-30 over tcp",
"msg_date": "Fri, 04 Jan 2013 18:41:25 +0400",
"msg_from": "=?UTF-8?B?bm9ib2R5IG5vd2hlcmU=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?UmVbMl06IFtQRVJGT1JNXSBTTVAgb24gYSBoZWF2eSBsb2FkZWQgZGF0YWJh?=\n\t=?UTF-8?B?c2U=?="
},
{
"msg_contents": "________________________________\n> From: [email protected] \n> To: [email protected] \n> Subject: [PERFORM] Re[2]: [PERFORM] SMP on a heavy loaded database \n> Date: Fri, 4 Jan 2013 18:41:25 +0400 \n> \n> \n> \n> \n> Пятница, 4 января 2013, 0:42 -07:00 от Scott Marlowe \n> <[email protected]>: \n> On Thu, Jan 3, 2013 at 4:45 PM, nobody nowhere \n> <[email protected]<http://sentmsg?compose&To=devnull%40mail.ua>> wrote: \n> > Centos 5.X kernel 2.6.18-274 \n> > pgsql-9.1 from pgdg-91-centos.repo \n> > relatively small database 3.2Gb \n> > Lot of insert, update, delete. \n> > \n> > I see non balanced _User_ usage on 14 CPU, exclusively assigned to \n> the hardware raid controller. \n> > What I'm doing wrong, and is it possible somehow to fix? \n> > \n> > Thanks in advance. \n> > \n> > Andrew. \n> > \n> > # top -d 10.00 -b -n 2 -U postgres -c \n> > \n> > top - 23:18:19 up 453 days, 57 min, 3 users, load average: 0.55, 0.47, 0.42 \n> > Tasks: 453 total, 1 running, 452 sleeping, 0 stopped, 0 zombie \n> > Cpu0 : 0.6%us, 0.1%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n> > Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n> > Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n> > Cpu3 : 1.2%us, 0.1%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n> > Cpu4 : 2.6%us, 0.4%sy, 0.0%ni, 96.8%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st \n> > Cpu5 : 0.8%us, 0.0%sy, 0.0%ni, 99.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n> > Cpu6 : 5.4%us, 0.2%sy, 0.0%ni, 94.2%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st \n> > Cpu7 : 3.3%us, 0.4%sy, 0.0%ni, 96.1%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st \n> > Cpu8 : 1.4%us, 0.3%sy, 0.0%ni, 98.2%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st \n> > Cpu9 : 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n> > Cpu10 : 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n> > Cpu11 : 1.6%us, 0.6%sy, 0.0%ni, 97.4%id, 0.0%wa, 0.0%hi, 0.4%si, 0.0%st \n> > Cpu12 : 0.5%us, 0.1%sy, 0.0%ni, 99.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n> > Cpu13 : 1.4%us, 0.2%sy, 0.0%ni, 98.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n> > Cpu14 : 24.2%us, 0.8%sy, 0.0%ni, 74.5%id, 0.3%wa, 0.0%hi, 0.2%si, 0.0%st \n> > Cpu15 : 0.7%us, 0.1%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.1%hi, 0.1%si, 0.0%st \n> > Mem: 16426540k total, 16356772k used, 69768k free, 215764k buffers \n> > Swap: 4194232k total, 145280k used, 4048952k free, 14434356k cached \n> > \n> \n> So how many concurrent users are accessing this db? pgsql assigns one \n> process on one core so to speak. It can't spread load for one user \n> over all cores. \n> 64 php Fast-cgi processes over the Unix socket and about 20-30 over tcp\n\nAre you running IRQ Balance ? The OS can pin process to the respective IRQ handler. \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 4 Jan 2013 09:47:51 -0500",
"msg_from": "Charles Gomes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re[2]: [PERFORM] SMP on a heavy loaded database"
},
{
"msg_contents": "On Fri, Jan 4, 2013 at 11:41 AM, nobody nowhere <[email protected]> wrote:\n> So how many concurrent users are accessing this db? pgsql assigns one\n> process on one core so to speak. It can't spread load for one user\n> over all cores.\n>\n> 64 php Fast-cgi processes over the Unix socket and about 20-30 over tcp\n\nI guess that means the server isn't dedicated to postgres...\n\n...have you checked which PID is using that core? Is it postgres-related?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 4 Jan 2013 11:52:10 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re[2]: [PERFORM] SMP on a heavy loaded database"
},
{
"msg_contents": "Пятница, 4 января 2013, 9:47 -05:00 от Charles Gomes <[email protected]>:\n>________________________________\n>> From: [email protected]\n>> To: [email protected]\n>> Subject: [PERFORM] Re[2]: [PERFORM] SMP on a heavy loaded database \n>> Date: Fri, 4 Jan 2013 18:41:25 +0400 \n>> \n>> \n>> \n>> \n>> Пятница, 4 января 2013, 0:42 -07:00 от Scott Marlowe \n>> < [email protected] >: \n>> On Thu, Jan 3, 2013 at 4:45 PM, nobody nowhere \n>> <[email protected]< http://sentmsg?compose&To=devnull%40mail.ua >> wrote: \n>> > Centos 5.X kernel 2.6.18-274 \n>> > pgsql-9.1 from pgdg-91-centos.repo \n>> > relatively small database 3.2Gb \n>> > Lot of insert, update, delete. \n>> > \n>> > I see non balanced _User_ usage on 14 CPU, exclusively assigned to \n>> the hardware raid controller. \n>> > What I'm doing wrong, and is it possible somehow to fix? \n>> > \n>> > Thanks in advance. \n>> > \n>> > Andrew. \n>> > \n>> > # top -d 10.00 -b -n 2 -U postgres -c \n>> > \n>> > top - 23:18:19 up 453 days, 57 min, 3 users, load average: 0.55, 0.47, 0.42 \n>> > Tasks: 453 total, 1 running, 452 sleeping, 0 stopped, 0 zombie \n>> > Cpu0 : 0.6%us, 0.1%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n>> > Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n>> > Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n>> > Cpu3 : 1.2%us, 0.1%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n>> > Cpu4 : 2.6%us, 0.4%sy, 0.0%ni, 96.8%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st \n>> > Cpu5 : 0.8%us, 0.0%sy, 0.0%ni, 99.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n>> > Cpu6 : 5.4%us, 0.2%sy, 0.0%ni, 94.2%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st \n>> > Cpu7 : 3.3%us, 0.4%sy, 0.0%ni, 96.1%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st \n>> > Cpu8 : 1.4%us, 0.3%sy, 0.0%ni, 98.2%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st \n>> > Cpu9 : 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n>> > Cpu10 : 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n>> > Cpu11 : 1.6%us, 0.6%sy, 0.0%ni, 97.4%id, 0.0%wa, 0.0%hi, 0.4%si, 0.0%st \n>> > Cpu12 : 0.5%us, 0.1%sy, 0.0%ni, 99.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n>> > Cpu13 : 1.4%us, 0.2%sy, 0.0%ni, 98.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st \n>> > Cpu14 : 24.2%us, 0.8%sy, 0.0%ni, 74.5%id, 0.3%wa, 0.0%hi, 0.2%si, 0.0%st \n>> > Cpu15 : 0.7%us, 0.1%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.1%hi, 0.1%si, 0.0%st \n>> > Mem: 16426540k total, 16356772k used, 69768k free, 215764k buffers \n>> > Swap: 4194232k total, 145280k used, 4048952k free, 14434356k cached \n>> > \n>> \n>> So how many concurrent users are accessing this db? pgsql assigns one \n>> process on one core so to speak. It can't spread load for one user \n>> over all cores. \n>> 64 php Fast-cgi processes over the Unix socket and about 20-30 over tcp\n>\n>Are you running IRQ Balance ? The OS can pin process to the respective IRQ handler. 
I switch off IRQ Balance on any heavily loaded server and statically assign IRQs to processors using\necho 000X > /proc/irq/XX/smp_affinity\n\nirqbalance does not help to fix it.",
"msg_date": "Fri, 04 Jan 2013 19:56:28 +0400",
"msg_from": "=?UTF-8?B?bm9ib2R5IG5vd2hlcmU=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?UmVbMl06IFtQRVJGT1JNXSBSZVsyXTogW1BFUkZPUk1dIFNNUCBvbiBhIGhl?=\n\t=?UTF-8?B?YXZ5IGxvYWRlZCBkYXRhYmFzZQ==?="
},
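For context on the echo above: smp_affinity takes a hexadecimal bitmask of allowed CPUs, so steering an IRQ to CPU14 alone means setting bit 14, i.e. a mask of 4000. A rough sketch, with the grep pattern and the IRQ number as placeholders to be looked up on the actual box:

# Find the RAID controller's IRQ line and see which CPU has been servicing it
grep -i raid /proc/interrupts
# Inspect and change the affinity mask for that IRQ (24 is only an example)
cat /proc/irq/24/smp_affinity
echo 4000 > /proc/irq/24/smp_affinity    # bit 14 set, i.e. CPU14 only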
{
"msg_contents": "Пятница, 4 января 2013, 11:52 -03:00 от Claudio Freire <[email protected]>:\n>On Fri, Jan 4, 2013 at 11:41 AM, nobody nowhere < [email protected] > wrote:\n>> So how many concurrent users are accessing this db? pgsql assigns one\n>> process on one core so to speak. It can't spread load for one user\n>> over all cores.\n>>\n>> 64 php Fast-cgi processes over the Unix socket and about 20-30 over tcp\n>\n>I guess that means the server isn't dedicated to postgres...\n>\n>...have you checked which PID is using that core? Is it postgres-related?\nHow do I know it?\n\nOnly postgres on this server heavely use the Raid controller. PHP comletely in XCache. At night I'll try to change socket to tcp. May be it will help,\n \n\nПятница, 4 января 2013, 11:52 -03:00 от Claudio Freire <[email protected]>:\n\n\n\n\n\nOn Fri, Jan 4, 2013 at 11:41 AM, nobody nowhere <[email protected]> wrote:> So how many concurrent users are accessing this db? pgsql assigns one> process on one core so to speak. It can't spread load for one user> over all cores.>> 64 php Fast-cgi processes over the Unix socket and about 20-30 over tcpI guess that means the server isn't dedicated to postgres......have you checked which PID is using that core? Is it postgres-related?\n\n\n\n\n\nHow do I know it?Only postgres on this server heavely use the Raid controller. PHP comletely in XCache. At night I'll try to change socket to tcp. May be it will help,",
"msg_date": "Fri, 04 Jan 2013 20:23:46 +0400",
"msg_from": "=?UTF-8?B?bm9ib2R5IG5vd2hlcmU=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?UmVbMl06IFtQRVJGT1JNXSBSZVsyXTogW1BFUkZPUk1dIFNNUCBvbiBhIGhl?=\n\t=?UTF-8?B?YXZ5IGxvYWRlZCBkYXRhYmFzZQ==?="
},
{
"msg_contents": "On Fri, Jan 4, 2013 at 1:23 PM, nobody nowhere <[email protected]> wrote:\n>\n> ...have you checked which PID is using that core? Is it postgres-related?\n>\n> How do I know it?\n\nAn unfiltered top or ps might give you a clue. You could also try\niotop, php does hit the filesystem (sessions stored in disk), and if\nit's on the same partition as postgres, postgres' fsyncs might cause\nit to flush to disk quite heavily.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 4 Jan 2013 13:43:16 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re[2]: [PERFORM] SMP on a heavy loaded database"
},
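A sketch of the unfiltered look suggested here (iotop needs root, and -o limits the output to processes that are actually doing I/O):

# All processes, batch mode, two 10-second samples
top -b -n 2 -d 10
# Per-process disk activity, same sampling
iotop -o -b -n 2 -d 10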
{
"msg_contents": "On Fri, Jan 4, 2013 at 3:38 PM, nobody nowhere <[email protected]> wrote:\n>\n> An unfiltered top or ps might give you a clue. You could also try\n>\n> Look at letter on the topic start.\n\nIt's filtered by -u postgres, so you can't see apache there.\n\n> iotop, php does hit the filesystem (sessions stored in disk), and if\n> it's on the same partition as postgres, postgres' fsyncs might cause\n> it to flush to disk quite heavily.\n>\n> The question was \"which PID is using that core?\"\n> Can you using top or iotop certanly answer on this question? I can't.\n\nIf you see some process hogging CPU/IO in a way that's consistent with\nCPU14, then you have a candidate. I don't see much in that iotop,\nexcept the 640k/s writes in pg's writer, which isn't much at all\nunless you have a seriously underpowered/broken system. If all fails,\nyou can look for processes with high accumulated cputime, like the\n\"monitor\" ones there on the first top (though it doesn't say much,\nsince that top is incomplete), or the walsender. Without the ability\nto compare against all other processes, none of that means much - but\nonce you do, you can inspect those processes more closely.\n\nOh... and you can also tell top to show the \"last used processor\". I\nguess I should have said this first ;-)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 4 Jan 2013 16:04:00 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re[2]: [PERFORM] SMP on a heavy loaded database"
},
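The non-interactive way to get the same information as top's "last used processor" column is the psr field of ps, which reports the processor each process last ran on. A sketch:

ps -u postgres -o pid,psr,pcpu,cmd --sort=-pcpu | head -30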
{
"msg_contents": "\n>Oh... and you can also tell top to show the \"last used processor\". I\n>guess I should have said this first ;-)\n\nEven if do not fix it, I'll know a new feature of top :)\nCertainly sure 14 CPU\n\n\n Total DISK READ: top - 21:54:38 up 453 days, 23:34, 1 user, load average: 0.56, 0.55, 0.48\nTasks: 429 total, 1 running, 428 sleeping, 0 stopped, 0 zombie\nCpu0 : 0.2%us, 0.1%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu2 : 0.1%us, 0.1%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu3 : 0.7%us, 0.1%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu4 : 1.5%us, 0.4%sy, 0.0%ni, 98.0%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st\nCpu5 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu6 : 2.1%us, 0.2%sy, 0.0%ni, 97.4%id, 0.2%wa, 0.0%hi, 0.1%si, 0.0%st\nCpu7 : 2.4%us, 0.4%sy, 0.0%ni, 97.0%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st\nCpu8 : 1.4%us, 0.4%sy, 0.0%ni, 98.1%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st\nCpu9 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu10 : 0.1%us, 0.0%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu11 : 1.2%us, 0.5%sy, 0.0%ni, 97.9%id, 0.0%wa, 0.0%hi, 0.5%si, 0.0%st\nCpu12 : 0.1%us, 0.0%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu13 : 0.1%us, 0.0%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu14 : 20.5%us, 0.9%sy, 0.0%ni, 78.1%id, 0.4%wa, 0.0%hi, 0.1%si, 0.0%st\nCpu15 : 1.2%us, 0.1%sy, 0.0%ni, 98.5%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st\nMem: 16426540k total, 16173980k used, 252560k free, 219348k buffers\nSwap: 4194232k total, 147296k used, 4046936k free, 14482096k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND\n 47 root RT -5 0 0 0 S 0.0 0.0 0:34.84 15 [migration/15]\n 48 root 34 19 0 0 0 S 0.0 0.0 0:01.42 15 [ksoftirqd/15]\n 49 root RT -5 0 0 0 S 0.0 0.0 0:00.00 15 [watchdog/15]\n 65 root 10 -5 0 0 0 S 0.0 0.0 0:00.03 15 [events/15]\n 238 root 10 -5 0 0 0 S 0.0 0.0 0:03.76 15 [kblockd/15]\n 406 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 15 [cqueue/15]\n 601 root 15 0 0 0 0 S 0.0 0.0 88:52.30 15 [pdflush]\n 620 root 15 -5 0 0 0 S 0.0 0.0 0:00.00 15 [aio/15]\n 964 root 17 -5 0 0 0 S 0.0 0.0 0:00.00 15 [ata/15]\n 2684 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 15 [kmpathd/15]\n 2914 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 15 [rpciod/15]\n 3270 root 19 -5 0 0 0 S 0.0 0.0 0:00.00 15 [ib_cm/15]\n 5906 rpc 15 0 8072 688 552 S 0.0 0.0 0:00.00 15 portmap\n14979 postgres 15 0 4316m 104m 103m S 0.0 0.6 6:54.39 15 /usr/pgsql-9.1/bin/postmaster -p 5432 -D /var/lib/pgsql/9.1/data\n 44 root RT -5 0 0 0 S 0.0 0.0 0:40.50 14 [migration/14]\n 45 root 34 19 0 0 0 S 0.0 0.0 0:03.51 14 [ksoftirqd/14]\n 46 root RT -5 0 0 0 S 0.0 0.0 0:00.00 14 [watchdog/14]\n 64 root 10 -5 0 0 0 S 0.0 0.0 0:00.04 14 [events/14]\n 237 root 10 -5 0 0 0 S 0.0 0.0 9:51.44 14 [kblockd/14]\n 405 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 14 [cqueue/14]\n 619 root 15 -5 0 0 0 S 0.0 0.0 0:00.00 14 [aio/14]\n 963 root 16 -5 0 0 0 S 0.0 0.0 0:00.00 14 [ata/14]\n 1092 root 10 -5 0 0 0 S 0.0 0.0 52:21.12 14 [kjournald]\n 2683 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 14 [kmpathd/14]\n 2724 root 10 -5 0 0 0 S 0.0 0.0 2:15.40 14 [kjournald]\n 2726 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 14 [kjournald]\n 2913 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 14 [rpciod/14]\n 3269 root 18 -5 0 0 0 S 0.0 0.0 0:00.00 14 [ib_cm/14]\n 8970 postgres 16 0 4327m 205m 197m S 0.2 1.3 0:01.33 14 postgres: user user_db [local] idle\n 8973 postgres 15 0 4327m 199m 191m S 0.1 1.2 0:00.37 14 postgres: user user_db 
[local] idle\n 8977 postgres 16 0 4328m 48m 40m S 0.7 0.3 0:00.76 14 postgres: user user_db [local] idle\n 8980 postgres 16 0 4328m 51m 43m S 0.1 0.3 0:00.50 14 postgres: user user_db [local] idle\n 8981 postgres 15 0 4327m 203m 195m S 0.0 1.3 0:00.72 14 postgres: user user_db [local] idle\n 8985 postgres 15 0 4327m 43m 36m S 0.1 0.3 0:00.29 14 postgres: user user_db [local] idle\n 8988 postgres 16 0 4328m 205m 196m S 0.0 1.3 0:00.91 14 postgres: user user_db [local] idle\n 8991 postgres 15 0 4327m 205m 197m S 0.1 1.3 0:00.79 14 postgres: user user_db [local] idle\n 8993 postgres 15 0 4328m 207m 199m S 1.9 1.3 0:00.99 14 postgres: user user_db [local] idle\n 8996 postgres 15 0 4328m 205m 196m S 1.1 1.3 0:00.93 14 postgres: user user_db [local] idle\n 9000 postgres 16 0 4328m 207m 199m S 0.7 1.3 0:00.82 14 postgres: user user_db [local] idle\n 9004 postgres 16 0 4329m 204m 194m S 0.1 1.3 0:00.69 14 postgres: user user_db [local] idle\n 9005 postgres 15 0 4327m 200m 193m S 0.7 1.2 0:00.63 14 postgres: user user_db [local] idle\n 9007 postgres 15 0 4327m 199m 192m S 0.1 1.2 0:00.49 14 postgres: user user_db [local] idle\n 9010 postgres 15 0 4327m 202m 195m S 0.2 1.3 0:00.65 14 postgres: user user_db [local] idle\n 9016 postgres 15 0 4326m 34m 28m S 0.1 0.2 0:00.15 14 postgres: user user_db [local] idle\n 9018 postgres 16 0 4327m 203m 195m S 1.0 1.3 0:00.72 14 postgres: user user_db [local] idle\n 9020 postgres 15 0 4327m 45m 37m S 0.1 0.3 0:00.49 14 postgres: user user_db [local] idle\n 9022 postgres 15 0 4327m 42m 35m S 0.1 0.3 0:00.20 14 postgres: user user_db [local] idle\n 9025 postgres 16 0 4328m 201m 193m S 0.3 1.3 0:00.75 14 postgres: user user_db [local] idle\n 9026 postgres 16 0 4327m 47m 40m S 0.1 0.3 0:00.49 14 postgres: user user_db [local] idle\n 9038 postgres 16 0 4327m 201m 193m S 0.1 1.3 0:00.70 14 postgres: user user_db [local] idle\n 9042 postgres 15 0 4327m 201m 193m S 1.8 1.3 0:00.71 14 postgres: user user_db [local] idle\n 9046 postgres 15 0 4327m 201m 193m S 0.1 1.3 0:00.65 14 postgres: user user_db [local] idle\n 9048 postgres 15 0 4327m 200m 193m S 1.4 1.2 0:00.52 14 postgres: user user_db [local] idle\n 9049 postgres 15 0 4328m 200m 192m S 0.1 1.2 0:00.50 14 postgres: user user_db [local] idle\n 9053 postgres 15 0 4327m 44m 37m S 0.1 0.3 0:00.34 14 postgres: user user_db [local] idle\n 9054 postgres 16 0 4327m 46m 40m S 0.1 0.3 0:00.43 14 postgres: user user_db [local] idle\n 9055 postgres 16 0 4328m 200m 192m S 0.0 1.3 0:00.39 14 postgres: user user_db [local] idle\n 9056 postgres 16 0 4328m 201m 192m S 0.7 1.3 0:00.75 14 postgres: user user_db [local] idle\n 9057 postgres 16 0 4327m 200m 192m S 0.2 1.3 0:00.72 14 postgres: user user_db [local] idle\n 9061 postgres 15 0 4328m 200m 192m S 0.0 1.2 0:00.49 14 postgres: user user_db [local] idle\n 9065 postgres 15 0 4328m 204m 196m S 0.3 1.3 0:00.80 14 postgres: user user_db [local] idle\n 9067 postgres 15 0 4327m 43m 35m S 0.0 0.3 0:00.30 14 postgres: user user_db [local] idle\n 9071 postgres 15 0 4327m 48m 40m S 0.1 0.3 0:00.53 14 postgres: user user_db [local] idle\n 9076 postgres 15 0 4326m 43m 36m S 0.0 0.3 0:00.61 14 postgres: user user_db [local] idle\n 9078 postgres 15 0 4328m 206m 198m S 0.0 1.3 0:00.64 14 postgres: user user_db [local] idle\n 9079 postgres 15 0 4327m 45m 38m S 0.0 0.3 0:00.37 14 postgres: user user_db [local] idle\n 9080 postgres 16 0 4327m 200m 193m S 0.0 1.3 0:00.62 14 postgres: user user_db [local] idle\n 9082 postgres 16 0 4328m 202m 193m S 1.5 1.3 0:00.84 14 postgres: user user_db [local] 
idle\n 9084 postgres 15 0 4327m 46m 38m S 0.0 0.3 0:00.54 14 postgres: user user_db [local] idle\n 9086 postgres 15 0 4328m 203m 194m S 0.0 1.3 0:00.38 14 postgres: user user_db [local] idle\n 9087 postgres 16 0 4327m 199m 192m S 1.0 1.2 0:00.63 14 postgres: user user_db [local] idle\n 9089 postgres 16 0 4328m 205m 196m S 0.2 1.3 0:00.87 14 postgres: user user_db [local] idle\n 9091 postgres 15 0 4327m 45m 38m S 0.1 0.3 0:00.41 14 postgres: user user_db [local] idle\n 9092 postgres 16 0 4326m 41m 34m S 0.0 0.3 0:00.27 14 postgres: user user_db [local] idle\n 9098 postgres 16 0 4329m 203m 194m S 3.5 1.3 0:00.65 14 postgres: user user_db [local] idle\n 9099 postgres 16 0 4327m 45m 38m S 0.0 0.3 0:00.41 14 postgres: user user_db [local] idle\n13629 root 18 0 65288 280 140 S 0.0 0.0 0:00.00 14 rpc.rquotad\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 05 Jan 2013 01:07:50 +0400",
"msg_from": "=?UTF-8?B?bm9ib2R5IG5vd2hlcmU=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?UmVbNF06IFtQRVJGT1JNXSBSZVsyXTogW1BFUkZPUk1dIFNNUCBvbiBhIGhl?=\n\t=?UTF-8?B?YXZ5IGxvYWRlZCBkYXRhYmFzZQ==?="
},
{
"msg_contents": "On Fri, Jan 4, 2013 at 6:07 PM, nobody nowhere <[email protected]> wrote:\n> 9092 postgres 16 0 4326m 41m 34m S 0.0 0.3 0:00.27 14 postgres: user user_db [local] idle\n> 9098 postgres 16 0 4329m 203m 194m S 3.5 1.3 0:00.65 14 postgres: user user_db [local] idle\n> 9099 postgres 16 0 4327m 45m 38m S 0.0 0.3 0:00.41 14 postgres: user user_db [local] idle\n\nThat looks like pg has been pinned to CPU14. I don't think it's pg's\ndoing. All I can think of is: check scheduler tweaks, numa, and pg's\ninitscript. Just in case it's being pinned explicitly.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 4 Jan 2013 18:20:17 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re[2]: [PERFORM] SMP on a heavy loaded database"
},
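One quick way to rule out explicit pinning is to read the affinity masks directly. A sketch; the PIDs below are taken from the earlier top output, and numactl may need to be installed separately:

# CPU affinity of the postmaster and of one of the busy backends
taskset -cp 14979
taskset -cp 9098
# NUMA layout, in case memory placement is steering the scheduler
numactl --hardware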
{
"msg_contents": "Пятница, 4 января 2013, 18:20 -03:00 от Claudio Freire <[email protected]>:\n>On Fri, Jan 4, 2013 at 6:07 PM, nobody nowhere < [email protected] > wrote:\n>> 9092 postgres 16 0 4326m 41m 34m S 0.0 0.3 0:00.27 14 postgres: user user_db [local] idle\n>> 9098 postgres 16 0 4329m 203m 194m S 3.5 1.3 0:00.65 14 postgres: user user_db [local] idle\n>> 9099 postgres 16 0 4327m 45m 38m S 0.0 0.3 0:00.41 14 postgres: user user_db [local] idle\n>\n>That looks like pg has been pinned to CPU14. I don't think it's pg's\n>doing. All I can think of is: check scheduler tweaks, numa, and pg's\n>initscript. Just in case it's being pinned explicitly.\nNot pinned. \nForks with tcp connection use other CPU. I just add connections pool and change socket to tcp\n\n#top -d 10.00 -b -n 2 -U postgres\ntop - 22:29:00 up 454 days, 8 min, 1 user, load average: 0.39, 0.51, 0.46\nTasks: 429 total, 1 running, 428 sleeping, 0 stopped, 0 zombie\nCpu0 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu1 : 0.1%us, 0.0%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu3 : 0.9%us, 0.1%sy, 0.0%ni, 98.9%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st\nCpu4 : 1.9%us, 0.4%sy, 0.0%ni, 97.5%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st\nCpu5 : 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu6 : 2.6%us, 0.1%sy, 0.0%ni, 97.2%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu7 : 1.6%us, 0.3%sy, 0.0%ni, 98.0%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st\nCpu8 : 1.6%us, 0.3%sy, 0.0%ni, 97.9%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st\nCpu9 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu10 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu11 : 1.1%us, 0.5%sy, 0.0%ni, 98.1%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st\nCpu12 : 1.0%us, 0.0%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu13 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu14 : 18.7%us, 0.3%sy, 0.0%ni, 80.6%id, 0.3%wa, 0.0%hi, 0.1%si, 0.0%st\nCpu15 : 0.0%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.1%hi, 0.2%si, 0.0%st\nMem: 16426540k total, 16368832k used, 57708k free, 219524k buffers\nSwap: 4194232k total, 147312k used, 4046920k free, 14468220k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND\n10129 postgres 16 0 4329m 243m 233m S 1.9 1.5 0:04.05 14 postgres: user user_db [local] idle\n10198 postgres 16 0 4329m 243m 234m S 1.9 1.5 0:03.49 14 postgres: user user_db [local] idle\n10092 postgres 16 0 4330m 238m 228m S 1.7 1.5 0:03.09 14 postgres: user user_db [local] idle\n10190 postgres 15 0 4328m 234m 226m S 1.7 1.5 0:02.94 14 postgres: user user_db [local] idle\n10169 postgres 16 0 4329m 235m 225m S 1.3 1.5 0:03.22 14 postgres: user user_db [local] idle\n10102 postgres 15 0 4328m 237m 227m S 1.2 1.5 0:03.24 14 postgres: user user_db [local] idle\n10217 postgres 16 0 4329m 241m 231m S 1.2 1.5 0:04.73 14 postgres: user user_db [local] idle\n10094 postgres 15 0 4330m 244m 233m S 0.9 1.5 0:03.67 14 postgres: user user_db [local] idle\n10137 postgres 16 0 4331m 238m 227m S 0.8 1.5 0:03.14 14 postgres: user user_db [local] idle\n10149 postgres 15 0 4328m 238m 229m S 0.8 1.5 0:03.07 14 postgres: user user_db [local] idle\n10161 postgres 16 0 4331m 245m 234m S 0.8 1.5 0:03.91 6 postgres: user user_db [local] idle\n10178 postgres 16 0 4330m 245m 234m S 0.8 1.5 0:04.01 14 postgres: user user_db [local] idle\n10182 postgres 16 0 4330m 236m 227m S 0.8 1.5 0:02.38 14 postgres: user user_db [local] idle\n10189 postgres 15 0 4330m 241m 
231m S 0.8 1.5 0:03.07 14 postgres: user user_db [local] idle\n10208 postgres 16 0 4329m 237m 227m S 0.8 1.5 0:03.74 14 postgres: user user_db [local] idle\n10128 postgres 16 0 4330m 240m 229m S 0.7 1.5 0:03.15 14 postgres: user user_db [local] idle\n10142 postgres 16 0 4331m 241m 230m S 0.7 1.5 0:03.23 14 postgres: user user_db [local] idle\n10194 postgres 15 0 4328m 236m 227m S 0.7 1.5 0:03.24 14 postgres: user user_db [local] idle\n 6878 postgres 15 0 4319m 2992 1472 S 0.3 0.0 44:06.10 11 postgres: wal sender process postgres XXX.XXX.XXX.XXX(47880) streaming 21D/D76286B0\n10180 postgres 16 0 4329m 240m 231m S 0.3 1.5 0:02.88 4 postgres: user user_db [local] idle\n10115 postgres 16 0 4331m 236m 225m S 0.2 1.5 0:03.53 14 postgres: user user_db [local] idle\n10162 postgres 16 0 4330m 240m 230m S 0.2 1.5 0:03.01 14 postgres: user user_db [local] idle\n10212 postgres 16 0 4329m 238m 228m S 0.2 1.5 0:03.52 14 postgres: user user_db [local] idle\n10213 postgres 15 0 4329m 238m 228m S 0.2 1.5 0:02.96 14 postgres: user user_db [local] idle\n10100 postgres 16 0 4331m 237m 226m S 0.1 1.5 0:03.39 14 postgres: user user_db [local] idle\n10112 postgres 16 0 4331m 240m 229m S 0.1 1.5 0:03.83 14 postgres: user user_db [local] idle\n10117 postgres 15 0 4329m 239m 229m S 0.1 1.5 0:04.42 14 postgres: user user_db [local] idle\n10121 postgres 16 0 4330m 240m 230m S 0.1 1.5 0:03.08 6 postgres: user user_db [local] idle\n10125 postgres 15 0 4329m 243m 233m S 0.1 1.5 0:04.90 14 postgres: user user_db [local] idle\n10127 postgres 15 0 4329m 238m 228m S 0.1 1.5 0:02.81 14 postgres: user user_db [local] idle\n10135 postgres 15 0 4329m 238m 229m S 0.1 1.5 0:03.20 14 postgres: user user_db [local] idle\n10136 postgres 16 0 4329m 237m 227m S 0.1 1.5 0:02.77 14 postgres: user user_db [local] idle\n10138 postgres 16 0 4330m 243m 232m S 0.1 1.5 0:03.46 14 postgres: user user_db [local] idle\n10139 postgres 15 0 4330m 236m 225m S 0.1 1.5 0:03.14 14 postgres: user user_db [local] idle\n10143 postgres 16 0 4330m 246m 236m S 0.1 1.5 0:02.93 14 postgres: user user_db [local] idle\n10144 postgres 16 0 4331m 237m 227m S 0.1 1.5 0:02.81 14 postgres: user user_db [local] idle\n10148 postgres 15 0 4331m 251m 240m S 0.1 1.6 0:04.07 14 postgres: user user_db [local] idle\n10165 postgres 16 0 4331m 246m 235m S 0.1 1.5 0:02.36 14 postgres: user user_db [local] idle\n10166 postgres 15 0 4330m 235m 226m S 0.1 1.5 0:02.55 14 postgres: user user_db [local] idle\n10168 postgres 15 0 4329m 234m 225m S 0.1 1.5 0:03.26 14 postgres: user user_db [local] idle\n10173 postgres 16 0 4329m 236m 226m S 0.1 1.5 0:02.82 6 postgres: user user_db [local] idle\n10174 postgres 15 0 4328m 240m 232m S 0.1 1.5 0:03.98 14 postgres: user user_db [local] idle\n10184 postgres 16 0 4328m 237m 228m S 0.1 1.5 0:02.85 14 postgres: user user_db [local] idle\n10186 postgres 15 0 4329m 239m 229m S 0.1 1.5 0:03.47 14 postgres: user user_db [local] idle\n10191 postgres 15 0 4330m 243m 233m S 0.1 1.5 0:03.69 14 postgres: user user_db [local] idle\n10195 postgres 16 0 4329m 240m 231m S 0.1 1.5 0:03.02 14 postgres: user user_db [local] idle\n10199 postgres 15 0 4331m 234m 222m S 0.1 1.5 0:02.87 14 postgres: user user_db [local] idle\n10203 postgres 15 0 4329m 234m 224m S 0.1 1.5 0:04.00 14 postgres: user user_db [local] idle\n10207 postgres 16 0 4331m 236m 225m S 0.1 1.5 0:03.52 6 postgres: user user_db [local] idle\n10210 postgres 15 0 4330m 237m 227m S 0.1 1.5 0:02.90 14 postgres: user user_db [local] idle\n10211 postgres 15 0 4330m 244m 234m S 0.1 1.5 0:03.24 14 
postgres: user user_db [local] idle\n10225 postgres 16 0 4330m 237m 226m S 0.1 1.5 0:03.55 14 postgres: user user_db [local] idle\n10226 postgres 16 0 4330m 235m 224m S 0.1 1.5 0:02.59 14 postgres: user user_db [local] idle\n10227 postgres 15 0 4332m 247m 236m S 0.1 1.5 0:03.71 14 postgres: user user_db [local] idle\n10229 postgres 16 0 4329m 236m 226m S 0.1 1.5 0:02.38 14 postgres: user user_db [local] idle\n 7818 postgres 15 0 4319m 6640 4680 S 0.0 0.0 0:00.06 8 postgres: postgres user_db XXX.XXX.XXX.XXX(1032) idle\n10097 postgres 16 0 4328m 235m 226m S 0.0 1.5 0:03.25 14 postgres: user user_db [local] idle\n10114 postgres 16 0 4331m 245m 234m S 0.0 1.5 0:03.79 14 postgres: user user_db [local] idle\n10118 postgres 15 0 4328m 235m 226m S 0.0 1.5 0:03.53 14 postgres: user user_db [local] idle\n10152 postgres 15 0 4331m 241m 229m S 0.0 1.5 0:03.55 14 postgres: user user_db [local] idle\n10170 postgres 16 0 4330m 240m 229m S 0.0 1.5 0:03.19 14 postgres: user user_db [local] idle\n10185 postgres 15 0 4330m 235m 225m S 0.0 1.5 0:03.83 14 postgres: user user_db [local] idle\n10187 postgres 16 0 4330m 237m 226m S 0.0 1.5 0:03.34 14 postgres: user user_db [local] idle\n10202 postgres 16 0 4330m 234m 224m S 0.0 1.5 0:02.74 14 postgres: user user_db [local] idle\n10220 postgres 16 0 4329m 258m 248m S 0.0 1.6 0:03.85 6 postgres: user user_db [local] idle\n10223 postgres 16 0 4331m 243m 233m S 0.0 1.5 0:03.85 14 postgres: user user_db [local] idle\n14378 postgres 15 0 4320m 7324 4928 S 0.0 0.0 0:00.03 4 postgres: postgres postgres XXX.XXX.XXX.XXX(1030) idle\n14979 postgres 15 0 4316m 104m 103m S 0.0 0.6 6:54.61 8 /usr/pgsql-9.1/bin/postmaster -p 5432 -D /var/lib/pgsql/9.1/data\n14981 postgres 15 0 112m 1368 728 S 0.0 0.0 0:00.06 12 postgres: logger process\n14995 postgres 15 0 4320m 2.0g 2.0g S 0.0 12.7 4:49.23 15 postgres: writer process\n14996 postgres 15 0 4318m 17m 16m S 0.0 0.1 0:12.96 15 postgres: wal writer process\n14997 postgres 15 0 4319m 3312 1568 S 0.0 0.0 0:10.30 2 postgres: autovacuum launcher process\n14998 postgres 15 0 114m 1444 756 S 0.0 0.0 0:13.32 15 postgres: archiver process last was 000000010000021D000000D6\n14999 postgres 15 0 115m 1840 808 S 0.0 0.0 30:32.88 1 postgres: stats collector process\n15027 postgres 15 0 4319m 80m 78m S 0.0 0.5 32:10.90 11 postgres: monitor user_db XXX.XXX.XXX.XXX(55433) idle\n15070 postgres 15 0 4319m 82m 80m S 0.0 0.5 29:12.70 7 postgres: monitor user_db XXX.XXX.XXX.XXX(59360) idle\n15808 postgres 16 0 4324m 15m 10m S 0.0 0.1 0:00.27 7 postgres: postgres user_db XXX.XXX.XXX.XXX(1031) idle\n19598 postgres 16 0 4320m 7328 4932 S 0.0 0.0 0:00.00 15 postgres: postgres postgres XXX.XXX.XXX.XXX(59745) idle\n19599 postgres 15 0 4321m 13m 10m S 0.0 0.1 0:00.10 4 postgres: postgres user_db XXX.XXX.XXX.XXX(59746) idle\n19625 postgres 15 0 4320m 8844 6076 S 0.0 0.1 0:00.04 11 postgres: postgres user_db XXX.XXX.XXX.XXX(59768) idle\n19633 postgres 15 0 4320m 7112 4880 S 0.0 0.0 0:00.00 11 postgres: postgres postgres XXX.XXX.XXX.XXX(3586) idle\n19634 postgres 15 0 4327m 19m 9.9m S 0.0 0.1 0:00.15 11 postgres: postgres user_db XXX.XXX.XXX.XXX(3588) idle\n19639 postgres 15 0 4321m 58m 55m S 0.0 0.4 0:00.15 4 postgres: postgres user_db XXX.XXX.XXX.XXX(3612) idle\n\n\n\n\nПятница, 4 января 2013, 18:20 -03:00 от Claudio Freire <[email protected]>:\n\n\n\n\n\nOn Fri, Jan 4, 2013 at 6:07 PM, nobody nowhere <[email protected]> wrote:> 9092 postgres 16 0 4326m 41m 34m S 0.0 0.3 0:00.27 14 postgres: user user_db [local] idle> 9098 postgres 16 0 4329m 203m 194m S 3.5 1.3 
0:00.65 14 postgres: user user_db [local] idle> 9099 postgres 16 0 4327m 45m 38m S 0.0 0.3 0:00.41 14 postgres: user user_db [local] idleThat looks like pg has been pinned to CPU14. I don't think it's pg'sdoing. All I can think of is: check scheduler tweaks, numa, and pg'sinitscript. Just in case it's being pinned explicitly.\n\n\n\n\n\nNot pinned. Forks with tcp connection use other CPU. I just add connections pool and change socket to tcp#top -d 10.00 -b -n 2 -U postgrestop - 22:29:00 up 454 days, 8 min, 1 user, load average: 0.39, 0.51, 0.46Tasks: 429 total, 1 running, 428 sleeping, 0 stopped, 0 zombieCpu0 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stCpu1 : 0.1%us, 0.0%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stCpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stCpu3 : 0.9%us, 0.1%sy, 0.0%ni, 98.9%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%stCpu4 : 1.9%us, 0.4%sy, 0.0%ni, 97.5%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%stCpu5 : 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stCpu6 : 2.6%us, 0.1%sy, 0.0%ni, 97.2%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%stCpu7 : 1.6%us, 0.3%sy, 0.0%ni, 98.0%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%stCpu8 : 1.6%us, 0.3%sy, 0.0%ni, 97.9%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%stCpu9 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stCpu10 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stCpu11 : 1.1%us, 0.5%sy, 0.0%ni, 98.1%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%stCpu12 : 1.0%us, 0.0%sy, 0.0%ni, 99.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stCpu13 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stCpu14 : 18.7%us, 0.3%sy, 0.0%ni, 80.6%id, 0.3%wa, 0.0%hi, 0.1%si, 0.0%stCpu15 : 0.0%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.1%hi, 0.2%si, 0.0%stMem: 16426540k total, 16368832k used, 57708k free, 219524k buffersSwap: 4194232k total, 147312k used, 4046920k free, 14468220k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ P COMMAND10129 postgres 16 0 4329m 243m 233m S 1.9 1.5 0:04.05 14 postgres: user user_db [local] idle10198 postgres 16 0 4329m 243m 234m S 1.9 1.5 0:03.49 14 postgres: user user_db [local] idle10092 postgres 16 0 4330m 238m 228m S 1.7 1.5 0:03.09 14 postgres: user user_db [local] idle10190 postgres 15 0 4328m 234m 226m S 1.7 1.5 0:02.94 14 postgres: user user_db [local] idle10169 postgres 16 0 4329m 235m 225m S 1.3 1.5 0:03.22 14 postgres: user user_db [local] idle10102 postgres 15 0 4328m 237m 227m S 1.2 1.5 0:03.24 14 postgres: user user_db [local] idle10217 postgres 16 0 4329m 241m 231m S 1.2 1.5 0:04.73 14 postgres: user user_db [local] idle10094 postgres 15 0 4330m 244m 233m S 0.9 1.5 0:03.67 14 postgres: user user_db [local] idle10137 postgres 16 0 4331m 238m 227m S 0.8 1.5 0:03.14 14 postgres: user user_db [local] idle10149 postgres 15 0 4328m 238m 229m S 0.8 1.5 0:03.07 14 postgres: user user_db [local] idle10161 postgres 16 0 4331m 245m 234m S 0.8 1.5 0:03.91 6 postgres: user user_db [local] idle10178 postgres 16 0 4330m 245m 234m S 0.8 1.5 0:04.01 14 postgres: user user_db [local] idle10182 postgres 16 0 4330m 236m 227m S 0.8 1.5 0:02.38 14 postgres: user user_db [local] idle10189 postgres 15 0 4330m 241m 231m S 0.8 1.5 0:03.07 14 postgres: user user_db [local] idle10208 postgres 16 0 4329m 237m 227m S 0.8 1.5 0:03.74 14 postgres: user user_db [local] idle10128 postgres 16 0 4330m 240m 229m S 0.7 1.5 0:03.15 14 postgres: user user_db [local] idle10142 postgres 16 0 4331m 241m 230m S 0.7 1.5 0:03.23 14 postgres: user user_db [local] idle10194 postgres 15 0 4328m 236m 227m S 0.7 1.5 0:03.24 14 
postgres: user user_db [local] idle 6878 postgres 15 0 4319m 2992 1472 S 0.3 0.0 44:06.10 11 postgres: wal sender process postgres XXX.XXX.XXX.XXX(47880) streaming 21D/D76286B010180 postgres 16 0 4329m 240m 231m S 0.3 1.5 0:02.88 4 postgres: user user_db [local] idle10115 postgres 16 0 4331m 236m 225m S 0.2 1.5 0:03.53 14 postgres: user user_db [local] idle10162 postgres 16 0 4330m 240m 230m S 0.2 1.5 0:03.01 14 postgres: user user_db [local] idle10212 postgres 16 0 4329m 238m 228m S 0.2 1.5 0:03.52 14 postgres: user user_db [local] idle10213 postgres 15 0 4329m 238m 228m S 0.2 1.5 0:02.96 14 postgres: user user_db [local] idle10100 postgres 16 0 4331m 237m 226m S 0.1 1.5 0:03.39 14 postgres: user user_db [local] idle10112 postgres 16 0 4331m 240m 229m S 0.1 1.5 0:03.83 14 postgres: user user_db [local] idle10117 postgres 15 0 4329m 239m 229m S 0.1 1.5 0:04.42 14 postgres: user user_db [local] idle10121 postgres 16 0 4330m 240m 230m S 0.1 1.5 0:03.08 6 postgres: user user_db [local] idle10125 postgres 15 0 4329m 243m 233m S 0.1 1.5 0:04.90 14 postgres: user user_db [local] idle10127 postgres 15 0 4329m 238m 228m S 0.1 1.5 0:02.81 14 postgres: user user_db [local] idle10135 postgres 15 0 4329m 238m 229m S 0.1 1.5 0:03.20 14 postgres: user user_db [local] idle10136 postgres 16 0 4329m 237m 227m S 0.1 1.5 0:02.77 14 postgres: user user_db [local] idle10138 postgres 16 0 4330m 243m 232m S 0.1 1.5 0:03.46 14 postgres: user user_db [local] idle10139 postgres 15 0 4330m 236m 225m S 0.1 1.5 0:03.14 14 postgres: user user_db [local] idle10143 postgres 16 0 4330m 246m 236m S 0.1 1.5 0:02.93 14 postgres: user user_db [local] idle10144 postgres 16 0 4331m 237m 227m S 0.1 1.5 0:02.81 14 postgres: user user_db [local] idle10148 postgres 15 0 4331m 251m 240m S 0.1 1.6 0:04.07 14 postgres: user user_db [local] idle10165 postgres 16 0 4331m 246m 235m S 0.1 1.5 0:02.36 14 postgres: user user_db [local] idle10166 postgres 15 0 4330m 235m 226m S 0.1 1.5 0:02.55 14 postgres: user user_db [local] idle10168 postgres 15 0 4329m 234m 225m S 0.1 1.5 0:03.26 14 postgres: user user_db [local] idle10173 postgres 16 0 4329m 236m 226m S 0.1 1.5 0:02.82 6 postgres: user user_db [local] idle10174 postgres 15 0 4328m 240m 232m S 0.1 1.5 0:03.98 14 postgres: user user_db [local] idle10184 postgres 16 0 4328m 237m 228m S 0.1 1.5 0:02.85 14 postgres: user user_db [local] idle10186 postgres 15 0 4329m 239m 229m S 0.1 1.5 0:03.47 14 postgres: user user_db [local] idle10191 postgres 15 0 4330m 243m 233m S 0.1 1.5 0:03.69 14 postgres: user user_db [local] idle10195 postgres 16 0 4329m 240m 231m S 0.1 1.5 0:03.02 14 postgres: user user_db [local] idle10199 postgres 15 0 4331m 234m 222m S 0.1 1.5 0:02.87 14 postgres: user user_db [local] idle10203 postgres 15 0 4329m 234m 224m S 0.1 1.5 0:04.00 14 postgres: user user_db [local] idle10207 postgres 16 0 4331m 236m 225m S 0.1 1.5 0:03.52 6 postgres: user user_db [local] idle10210 postgres 15 0 4330m 237m 227m S 0.1 1.5 0:02.90 14 postgres: user user_db [local] idle10211 postgres 15 0 4330m 244m 234m S 0.1 1.5 0:03.24 14 postgres: user user_db [local] idle10225 postgres 16 0 4330m 237m 226m S 0.1 1.5 0:03.55 14 postgres: user user_db [local] idle10226 postgres 16 0 4330m 235m 224m S 0.1 1.5 0:02.59 14 postgres: user user_db [local] idle10227 postgres 15 0 4332m 247m 236m S 0.1 1.5 0:03.71 14 postgres: user user_db [local] idle10229 postgres 16 0 4329m 236m 226m S 0.1 1.5 0:02.38 14 postgres: user user_db [local] idle 7818 postgres 15 0 4319m 6640 4680 S 0.0 0.0 0:00.06 8 postgres: 
postgres user_db XXX.XXX.XXX.XXX(1032) idle10097 postgres 16 0 4328m 235m 226m S 0.0 1.5 0:03.25 14 postgres: user user_db [local] idle10114 postgres 16 0 4331m 245m 234m S 0.0 1.5 0:03.79 14 postgres: user user_db [local] idle10118 postgres 15 0 4328m 235m 226m S 0.0 1.5 0:03.53 14 postgres: user user_db [local] idle10152 postgres 15 0 4331m 241m 229m S 0.0 1.5 0:03.55 14 postgres: user user_db [local] idle10170 postgres 16 0 4330m 240m 229m S 0.0 1.5 0:03.19 14 postgres: user user_db [local] idle10185 postgres 15 0 4330m 235m 225m S 0.0 1.5 0:03.83 14 postgres: user user_db [local] idle10187 postgres 16 0 4330m 237m 226m S 0.0 1.5 0:03.34 14 postgres: user user_db [local] idle10202 postgres 16 0 4330m 234m 224m S 0.0 1.5 0:02.74 14 postgres: user user_db [local] idle10220 postgres 16 0 4329m 258m 248m S 0.0 1.6 0:03.85 6 postgres: user user_db [local] idle10223 postgres 16 0 4331m 243m 233m S 0.0 1.5 0:03.85 14 postgres: user user_db [local] idle14378 postgres 15 0 4320m 7324 4928 S 0.0 0.0 0:00.03 4 postgres: postgres postgres XXX.XXX.XXX.XXX(1030) idle14979 postgres 15 0 4316m 104m 103m S 0.0 0.6 6:54.61 8 /usr/pgsql-9.1/bin/postmaster -p 5432 -D /var/lib/pgsql/9.1/data14981 postgres 15 0 112m 1368 728 S 0.0 0.0 0:00.06 12 postgres: logger process14995 postgres 15 0 4320m 2.0g 2.0g S 0.0 12.7 4:49.23 15 postgres: writer process14996 postgres 15 0 4318m 17m 16m S 0.0 0.1 0:12.96 15 postgres: wal writer process14997 postgres 15 0 4319m 3312 1568 S 0.0 0.0 0:10.30 2 postgres: autovacuum launcher process14998 postgres 15 0 114m 1444 756 S 0.0 0.0 0:13.32 15 postgres: archiver process last was 000000010000021D000000D614999 postgres 15 0 115m 1840 808 S 0.0 0.0 30:32.88 1 postgres: stats collector process15027 postgres 15 0 4319m 80m 78m S 0.0 0.5 32:10.90 11 postgres: monitor user_db XXX.XXX.XXX.XXX(55433) idle15070 postgres 15 0 4319m 82m 80m S 0.0 0.5 29:12.70 7 postgres: monitor user_db XXX.XXX.XXX.XXX(59360) idle15808 postgres 16 0 4324m 15m 10m S 0.0 0.1 0:00.27 7 postgres: postgres user_db XXX.XXX.XXX.XXX(1031) idle19598 postgres 16 0 4320m 7328 4932 S 0.0 0.0 0:00.00 15 postgres: postgres postgres XXX.XXX.XXX.XXX(59745) idle19599 postgres 15 0 4321m 13m 10m S 0.0 0.1 0:00.10 4 postgres: postgres user_db XXX.XXX.XXX.XXX(59746) idle19625 postgres 15 0 4320m 8844 6076 S 0.0 0.1 0:00.04 11 postgres: postgres user_db XXX.XXX.XXX.XXX(59768) idle19633 postgres 15 0 4320m 7112 4880 S 0.0 0.0 0:00.00 11 postgres: postgres postgres XXX.XXX.XXX.XXX(3586) idle19634 postgres 15 0 4327m 19m 9.9m S 0.0 0.1 0:00.15 11 postgres: postgres user_db XXX.XXX.XXX.XXX(3588) idle19639 postgres 15 0 4321m 58m 55m S 0.0 0.4 0:00.15 4 postgres: postgres user_db XXX.XXX.XXX.XXX(3612) idle",
"msg_date": "Sat, 05 Jan 2013 01:38:12 +0400",
"msg_from": "=?UTF-8?B?bm9ib2R5IG5vd2hlcmU=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?UmVbNl06IFtQRVJGT1JNXSBSZVsyXTogW1BFUkZPUk1dIFNNUCBvbiBhIGhl?=\n\t=?UTF-8?B?YXZ5IGxvYWRlZCBkYXRhYmFzZQ==?="
},
{
"msg_contents": "On Fri, Jan 4, 2013 at 6:38 PM, nobody nowhere <[email protected]> wrote:\n> On Fri, Jan 4, 2013 at 6:07 PM, nobody nowhere <[email protected]> wrote:\n>> 9092 postgres 16 0 4326m 41m 34m S 0.0 0.3 0:00.27 14 postgres: user\n>> user_db [local] idle\n>> 9098 postgres 16 0 4329m 203m 194m S 3.5 1.3 0:00.65 14 postgres: user\n>> user_db [local] idle\n>> 9099 postgres 16 0 4327m 45m 38m S 0.0 0.3 0:00.41 14 postgres: user\n>> user_db [local] idle\n>\n> That looks like pg has been pinned to CPU14. I don't think it's pg's\n> doing. All I can think of is: check scheduler tweaks, numa, and pg's\n> initscript. Just in case it's being pinned explicitly.\n>\n> Not pinned.\n> Forks with tcp connection use other CPU. I just add connections pool and\n> change socket to tcp\n\nHow interesting. It must be a peculiarity of unix sockets. I know unix\nsockets have close to no buffering, task-switching to the consumer\ninstead of buffering. Perhaps what you're experiencing here is this\n\"optimization\" effect. It's probably not harmful at all. The OS will\nswitch to another CPU if the need arises.\n\nHave you done any stress testing? Is there any actual performance impact?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 4 Jan 2013 18:53:17 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re[2]: [PERFORM] SMP on a heavy loaded database"
},
{
"msg_contents": "=?UTF-8?B?bm9ib2R5IG5vd2hlcmU=?= <[email protected]> writes:\n> [ all postgres processes seem to be pinned to CPU 14 ]\n\nI wonder whether this is a \"benefit\" of sched_autogroup_enabled?\n\nhttp://archives.postgresql.org/message-id/[email protected]\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 04 Jan 2013 18:01:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re:\n =?UTF-8?B?UmVbNF06IFtQRVJGT1JNXSBSZVsyXTogW1BFUkZPUk1dIFNNUCBvbiBhIGhl?=\n\t=?UTF-8?B?YXZ5IGxvYWRlZCBkYXRhYmFzZQ==?="
},
{
"msg_contents": "\n\n\nПятница, 4 января 2013, 18:53 -03:00 от Claudio Freire <[email protected]>:\n>On Fri, Jan 4, 2013 at 6:38 PM, nobody nowhere < [email protected] > wrote:\n>> On Fri, Jan 4, 2013 at 6:07 PM, nobody nowhere < [email protected] > wrote:\n>>> 9092 postgres 16 0 4326m 41m 34m S 0.0 0.3 0:00.27 14 postgres: user\n>>> user_db [local] idle\n>>> 9098 postgres 16 0 4329m 203m 194m S 3.5 1.3 0:00.65 14 postgres: user\n>>> user_db [local] idle\n>>> 9099 postgres 16 0 4327m 45m 38m S 0.0 0.3 0:00.41 14 postgres: user\n>>> user_db [local] idle\n>>\n>> That looks like pg has been pinned to CPU14. I don't think it's pg's\n>> doing. All I can think of is: check scheduler tweaks, numa, and pg's\n>> initscript. Just in case it's being pinned explicitly.\n>>\n>> Not pinned.\n>> Forks with tcp connection use other CPU. I just add connections pool and\n>> change socket to tcp\n>\n>How interesting. It must be a peculiarity of unix sockets. I know unix\n>sockets have close to no buffering, task-switching to the consumer\n>instead of buffering. Perhaps what you're experiencing here is this\n>\"optimization\" effect. It's probably not harmful at all. The OS will\n>switch to another CPU if the need arises. It's not socket problem. \n\nThs same result when I change php fast-cgi connection to tcp, \nRemote clients over tcp use insert-delete. Just data collection. Nothing more.\nLocally php its lot of PL data processing functions. It's PL problem !!\n\n>\n>\n>Have you done any stress testing? Is there any actual performance impact?\n\nOn my experience stress testing and real production perfomance usually absolutely different. :)\nNo application development going together with business growing. We just add functional to the system step by step.\nFor a last couple month we just grow up quickly and I decide to check performance :(\n\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 05 Jan 2013 12:37:49 +0400",
"msg_from": "=?UTF-8?B?bm9ib2R5IG5vd2hlcmU=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?UmVbMl06IFtQRVJGT1JNXSBSZVsyXTogW1BFUkZPUk1dIFNNUCBvbiBhIGhl?=\n\t=?UTF-8?B?YXZ5IGxvYWRlZCBkYXRhYmFzZQ==?="
},
{
"msg_contents": "> > [ all postgres processes seem to be pinned to CPU 14 ]\n> \n> I wonder whether this is a \"benefit\" of sched_autogroup_enabled?\n> \n> http://archives.postgresql.org/message-id/[email protected]\n> \n> \t\t\tregards, tom lane\n\nThanks Lane\n\nRHEL 5.x \n:(\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 05 Jan 2013 12:53:22 +0400",
"msg_from": "=?UTF-8?B?bm9ib2R5IG5vd2hlcmU=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?UmVbMl06IFtQRVJGT1JNXSBSZVs0XTogW1BFUkZPUk1dIFJlWzJdOiBbUEVS?=\n\t=?UTF-8?B?Rk9STV0gU01QIG9uIGEgaGVhdnkgbG9hZGVkIGRhdGFiYXNl?="
},
{
"msg_contents": "Fixed by \n\nsynchronous_commit = off\n\n\nСуббота, 5 января 2013, 12:53 +04:00 от nobody nowhere <[email protected]>:\n> > > [ all postgres processes seem to be pinned to CPU 14 ]\n> > \n> > I wonder whether this is a \"benefit\" of sched_autogroup_enabled?\n> > \n> > http://archives.postgresql.org/message-id/[email protected]\n> > \n> > \t\t\tregards, tom lane\n> \n> Thanks Lane\n> \n> RHEL 5.x \n> :(\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 07 Jan 2013 22:10:17 +0400",
"msg_from": "=?UTF-8?B?bm9ib2R5IG5vd2hlcmU=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?UmU6IFNNUCBvbiBhIGhlYXZ5IGxvYWRlZCBkYXRhYmFzZSBGSVhFRCAhISEh?="
}
] |
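A note on the fix above: synchronous_commit does not have to be turned off cluster-wide in postgresql.conf. It is a per-session setting, so the asynchronous-commit window (transactions that could be lost, though never corrupted, on a crash) can be confined to the connections that actually need the speed-up. A minimal SQL sketch, assuming a role named app_user for the pooled local connections doing the heavy PL work:

    -- per role: only the application role commits asynchronously
    ALTER ROLE app_user SET synchronous_commit = off;

    -- or per transaction, inside the heavy PL processing only
    BEGIN;
    SET LOCAL synchronous_commit = off;
    -- ... inserts/updates/deletes ...
    COMMIT;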
[
{
"msg_contents": "Hi,\n\nUpdated information in this post.\n\nI have installed Postgresql 9.2.2 (complied by gcc) in FreeBSD 9.1 i386.\nThe pgsql base directory is in a ZFS dataset.\n\nI\n have noticed the performance is sub-optimal, but I know the default \nsetting should be the most safest one to be use (without possible data \ncorruption/loss).\n\na) I use OTRS ticketing system version 3.1, the backend is PostgreSQL.\nThe user interactive response is not slow (switching web pages or create a change).\n\nb) There is a benchmark in the support module of OTRS.\nIt tested insert,update,select and delete performance.\nThe response time is slow (>10 sec), except select.\n\nI have done some research on web, with below settings (just one change, not both), the performance returned to normal:\n\n1) Disabled sync in the pgsql dataset in ZFS\nzfs set sync=disabled mydata/pgsql\nor \n2) In\n postgresql.conf, set synchronous_commit from on to off\n\nI know the above setting would lead to data loss (e.g.power goes off), any comments?\n\nPS:\n1) I have tried to use primarycache/secondarycache=metadata/none, it do not seem to help.\n\n2)\nI have tried the default setting on Linux too:\nRHEL 6.3, ext4, stock postgresql 8.x, OTRS 3.1.\nThe web site is responsive and the benchmark result is more or less the same as FreeBSD with the 'sync' turned off.\n\n3)\nFor FreeBSD, same setting with Postgresql on UFS:\nThe performance is between ZFS (default, sync enabled) and ZFS (sync disabled).\n\nThanks,\nPatrick\n\n--- On Mon, 1/7/13, Patrick Dung <[email protected]> wrote:\n\nFrom: Patrick Dung <[email protected]>\nSubject: Sub optimal performance with default setting of Postgresql with FreeBSD 9.1 on ZFS\nTo: [email protected]\nDate: Monday, January 7, 2013, 11:32 PM\n\nHi,\n\nI have installed Postgresql 9.2.2 (complied by gcc) in FreeBSD 9.1 i386.\nThe pgsql base directory is in a ZFS dataset.\n\nI have noticed the performance is sub-optimal, but I know the default setting should be the most safest one to be use (without possible data corruption/loss).\n\na) I use OTRS ticketing system, the backend is PostgreSQL.\nThe user interactive response is not slow (switching web pages or create a change).\n\nb) There is a benchmark in the support module of OTRS.\nIt tested insert,update,select and delete performance.\nThe response time is slow (>10 sec), except select.\n\nI have done some research on web, with below settings (just one change, not both), the performance returned to normal:\n\n1) Disabled sync in the pgsql dataset in ZFS\nzfs set sync=disabled mydata/pgsql\nor \n2) In\n postgresql.conf, set synchronous_commit from on to off\n\nI know the above setting would lead to data loss (e.g.power goes off), any comments?\n\nPS:\n1) I have tried to use primarycache/secondarycache=metadata/none, it do not seem to help.\n\n2)\nI have tried the default setting on Linux too:\nRHEL 6.3, stock postgresql 8.x, OTRS 3.1.\nThe web site is responsive and the benchmark result is more or less the same as FreeBSD with the 'sync' turned off.\n\nThanks,\nPatrick\n\nHi,Updated information in this post.I have installed Postgresql 9.2.2 (complied by gcc) in FreeBSD 9.1 i386.The pgsql base directory is in a ZFS dataset.I\n have noticed the performance is sub-optimal, but I know the default \nsetting should be the most safest one to be use (without possible data \ncorruption/loss).a) I use OTRS ticketing system version 3.1, the backend is PostgreSQL.The user interactive response is not slow (switching web pages or create a change).b) 
There is a benchmark in the support module of OTRS.It tested insert,update,select and delete performance.The response time is slow (>10 sec), except select.I have done some research on web, with below settings (just one change, not both), the performance returned to normal:1) Disabled sync in the pgsql dataset in ZFSzfs set sync=disabled mydata/pgsqlor 2) In\n postgresql.conf, set synchronous_commit from on to offI know the above setting would lead to data loss (e.g.power goes off), any comments?PS:1) I have tried to use primarycache/secondarycache=metadata/none, it do not seem to help.2)I have tried the default setting on Linux too:RHEL 6.3, ext4, stock postgresql 8.x, OTRS 3.1.The web site is responsive and the benchmark result is more or less the same as FreeBSD with the 'sync' turned off.3)For FreeBSD, same setting with Postgresql on UFS:The performance is between ZFS (default, sync enabled) and ZFS (sync disabled).Thanks,Patrick--- On Mon, 1/7/13, Patrick Dung <[email protected]> wrote:From: Patrick Dung <[email protected]>Subject: Sub optimal performance with default setting of\n Postgresql with FreeBSD 9.1 on ZFSTo: [email protected]: Monday, January 7, 2013, 11:32 PMHi,I have installed Postgresql 9.2.2 (complied by gcc) in FreeBSD 9.1 i386.The pgsql base directory is in a ZFS dataset.I have noticed the performance is sub-optimal, but I know the default setting should be the most safest one to be use (without possible data corruption/loss).a) I use OTRS ticketing system, the backend is PostgreSQL.The user interactive response is not slow (switching web pages or create a change).b) There is a benchmark in the support module of OTRS.It tested insert,update,select and delete performance.The response time is slow (>10 sec), except select.I have done some research on web, with below settings (just one change, not both),\n the performance returned to normal:1) Disabled sync in the pgsql dataset in ZFSzfs set sync=disabled mydata/pgsqlor 2) In\n postgresql.conf, set synchronous_commit from on to offI know the above setting would lead to data loss (e.g.power goes off), any comments?PS:1) I have tried to use primarycache/secondarycache=metadata/none, it do not seem to help.2)I have tried the default setting on Linux too:RHEL 6.3, stock postgresql 8.x, OTRS 3.1.The web site is responsive and the benchmark result is more or less the same as FreeBSD with the 'sync' turned off.Thanks,Patrick",
"msg_date": "Tue, 8 Jan 2013 01:18:02 +0800 (SGT)",
"msg_from": "Patrick Dung <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sub optimal performance with default setting of Postgresql with\n\tFreeBSD 9.1 on ZFS"
},
{
"msg_contents": "Hi Patrick,\n\nYou really need a flash ZIL with ZFS to handle syncs effectively. Setting\nthe sync_commit to off is the best you can do without it. Not that that\nis bad, we do that here as well.\n\nRegards,\nKen\n\nOn Tue, Jan 08, 2013 at 01:18:02AM +0800, Patrick Dung wrote:\n> Hi,\n> \n> Updated information in this post.\n> \n> I have installed Postgresql 9.2.2 (complied by gcc) in FreeBSD 9.1 i386.\n> The pgsql base directory is in a ZFS dataset.\n> \n> I\n> have noticed the performance is sub-optimal, but I know the default \n> setting should be the most safest one to be use (without possible data \n> corruption/loss).\n> \n> a) I use OTRS ticketing system version 3.1, the backend is PostgreSQL.\n> The user interactive response is not slow (switching web pages or create a change).\n> \n> b) There is a benchmark in the support module of OTRS.\n> It tested insert,update,select and delete performance.\n> The response time is slow (>10 sec), except select.\n> \n> I have done some research on web, with below settings (just one change, not both), the performance returned to normal:\n> \n> 1) Disabled sync in the pgsql dataset in ZFS\n> zfs set sync=disabled mydata/pgsql\n> or \n> 2) In\n> postgresql.conf, set synchronous_commit from on to off\n> \n> I know the above setting would lead to data loss (e.g.power goes off), any comments?\n> \n> PS:\n> 1) I have tried to use primarycache/secondarycache=metadata/none, it do not seem to help.\n> \n> 2)\n> I have tried the default setting on Linux too:\n> RHEL 6.3, ext4, stock postgresql 8.x, OTRS 3.1.\n> The web site is responsive and the benchmark result is more or less the same as FreeBSD with the 'sync' turned off.\n> \n> 3)\n> For FreeBSD, same setting with Postgresql on UFS:\n> The performance is between ZFS (default, sync enabled) and ZFS (sync disabled).\n> \n> Thanks,\n> Patrick\n> \n> --- On Mon, 1/7/13, Patrick Dung <[email protected]> wrote:\n> \n> From: Patrick Dung <[email protected]>\n> Subject: Sub optimal performance with default setting of Postgresql with FreeBSD 9.1 on ZFS\n> To: [email protected]\n> Date: Monday, January 7, 2013, 11:32 PM\n> \n> Hi,\n> \n> I have installed Postgresql 9.2.2 (complied by gcc) in FreeBSD 9.1 i386.\n> The pgsql base directory is in a ZFS dataset.\n> \n> I have noticed the performance is sub-optimal, but I know the default setting should be the most safest one to be use (without possible data corruption/loss).\n> \n> a) I use OTRS ticketing system, the backend is PostgreSQL.\n> The user interactive response is not slow (switching web pages or create a change).\n> \n> b) There is a benchmark in the support module of OTRS.\n> It tested insert,update,select and delete performance.\n> The response time is slow (>10 sec), except select.\n> \n> I have done some research on web, with below settings (just one change, not both), the performance returned to normal:\n> \n> 1) Disabled sync in the pgsql dataset in ZFS\n> zfs set sync=disabled mydata/pgsql\n> or \n> 2) In\n> postgresql.conf, set synchronous_commit from on to off\n> \n> I know the above setting would lead to data loss (e.g.power goes off), any comments?\n> \n> PS:\n> 1) I have tried to use primarycache/secondarycache=metadata/none, it do not seem to help.\n> \n> 2)\n> I have tried the default setting on Linux too:\n> RHEL 6.3, stock postgresql 8.x, OTRS 3.1.\n> The web site is responsive and the benchmark result is more or less the same as FreeBSD with the 'sync' turned off.\n> \n> Thanks,\n> Patrick\n\n\n-- \nSent via 
pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 7 Jan 2013 12:28:11 -0600",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sub optimal performance with default setting of\n\tPostgresql with FreeBSD 9.1 on ZFS"
},
{
"msg_contents": "Hi Ken,\n\nThanks for reply.\nAfter researching, I get more understanding with the importance of the ZIL\nhttp://constantin.glez.de/blog/2010/07/solaris-zfs-synchronous-writes-and-zil-explained\nSo the performance issue is the ZIL... \n\nBTW, using a standard UFS+software update is still slower than Linux or ZFS without sync.\n\nBest regards,\nPatrick\n\n--- On Tue, 1/8/13, [email protected] <[email protected]> wrote:\n\nFrom: [email protected] <[email protected]>\nSubject: Re: [PERFORM] Sub optimal performance with default setting of Postgresql with FreeBSD 9.1 on ZFS\nTo: \"Patrick Dung\" <[email protected]>\nCc: [email protected]\nDate: Tuesday, January 8, 2013, 2:28 AM\n\nHi Patrick,\n\nYou really need a flash ZIL with ZFS to handle syncs effectively. Setting\nthe sync_commit to off is the best you can do without it. Not that that\nis bad, we do that here as well.\n\nRegards,\nKen\n\nOn Tue, Jan 08, 2013 at 01:18:02AM +0800, Patrick Dung wrote:\n> Hi,\n> \n> Updated information in this post.\n> \n> I have installed Postgresql 9.2.2 (complied by gcc) in FreeBSD 9.1 i386.\n> The pgsql base directory is in a ZFS dataset.\n> \n> I\n> have noticed the performance is sub-optimal, but I know the default \n> setting should be the most safest one to be use (without possible data \n> corruption/loss).\n> \n> a) I use OTRS ticketing system version 3.1, the backend is PostgreSQL.\n> The user interactive response is not slow (switching web pages or create a change).\n> \n> b) There is a benchmark in the support module of OTRS.\n> It tested insert,update,select and delete performance.\n> The response time is slow (>10 sec), except select.\n> \n> I have done some research on web, with below settings (just one change, not both), the performance returned to normal:\n> \n> 1) Disabled sync in the pgsql dataset in ZFS\n> zfs set sync=disabled mydata/pgsql\n> or \n> 2) In\n> postgresql.conf, set synchronous_commit from on to off\n> \n> I know the above setting would lead to data loss (e.g.power goes off), any comments?\n> \n> PS:\n> 1) I have tried to use primarycache/secondarycache=metadata/none, it do not seem to help.\n> \n> 2)\n> I have tried the default setting on Linux too:\n> RHEL 6.3, ext4, stock postgresql 8.x, OTRS 3.1.\n> The web site is responsive and the benchmark result is more or less the same as FreeBSD with the 'sync' turned off.\n> \n> 3)\n> For FreeBSD, same setting with Postgresql on UFS:\n> The performance is between ZFS (default, sync enabled) and ZFS (sync disabled).\n> \n> Thanks,\n> Patrick\n> \n> --- On Mon, 1/7/13, Patrick Dung <[email protected]> wrote:\n> \n> From: Patrick Dung <[email protected]>\n> Subject: Sub optimal performance with default setting of Postgresql with FreeBSD 9.1 on ZFS\n> To: [email protected]\n> Date: Monday, January 7, 2013, 11:32 PM\n> \n> Hi,\n> \n> I have installed Postgresql 9.2.2 (complied by gcc) in FreeBSD 9.1 i386.\n> The pgsql base directory is in a ZFS dataset.\n> \n> I have noticed the performance is sub-optimal, but I know the default setting should be the most safest one to be use (without possible data corruption/loss).\n> \n> a) I use OTRS ticketing system, the backend is PostgreSQL.\n> The user interactive response is not slow (switching web pages or create a change).\n> \n> b) There is a benchmark in the support module of OTRS.\n> It tested insert,update,select and delete performance.\n> The response time is slow (>10 sec), except select.\n> \n> I have done some research on web, with below settings (just one change, not 
both), the performance returned to normal:\n> \n> 1) Disabled sync in the pgsql dataset in ZFS\n> zfs set sync=disabled mydata/pgsql\n> or \n> 2) In\n> postgresql.conf, set synchronous_commit from on to off\n> \n> I know the above setting would lead to data loss (e.g.power goes off), any comments?\n> \n> PS:\n> 1) I have tried to use primarycache/secondarycache=metadata/none, it do not seem to help.\n> \n> 2)\n> I have tried the default setting on Linux too:\n> RHEL 6.3, stock postgresql 8.x, OTRS 3.1.\n> The web site is responsive and the benchmark result is more or less the same as FreeBSD with the 'sync' turned off.\n> \n> Thanks,\n> Patrick\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\nHi Ken,Thanks for reply.After researching, I get more understanding with the importance of the ZILhttp://constantin.glez.de/blog/2010/07/solaris-zfs-synchronous-writes-and-zil-explainedSo the performance issue is the ZIL... BTW, using a standard UFS+software update is still slower than Linux or ZFS without sync.Best regards,Patrick--- On Tue, 1/8/13, [email protected] <[email protected]> wrote:From: [email protected] <[email protected]>Subject: Re: [PERFORM] Sub optimal performance with default setting of Postgresql with FreeBSD 9.1 on ZFSTo: \"Patrick Dung\" <[email protected]>Cc: [email protected]: Tuesday, January 8, 2013, 2:28 AMHi Patrick,You really need a flash ZIL with ZFS to handle syncs effectively. Settingthe sync_commit to off is the best you can do without it. Not that thatis bad, we do that here as well.Regards,KenOn Tue, Jan 08, 2013 at 01:18:02AM +0800, Patrick Dung wrote:> Hi,> > Updated information in this post.> > I have installed Postgresql 9.2.2 (complied by gcc) in FreeBSD 9.1 i386.> The pgsql base directory is in a ZFS dataset.> > I> have noticed the performance is sub-optimal, but I know the default > setting should be the most safest one to be use (without possible data > corruption/loss).> > a) I use OTRS ticketing system version 3.1, the backend is PostgreSQL.> The user interactive response is not slow (switching web pages or create a change).> > b) There is a benchmark in the support\n module of OTRS.> It tested insert,update,select and delete performance.> The response time is slow (>10 sec), except select.> > I have done some research on web, with below settings (just one change, not both), the performance returned to normal:> > 1) Disabled sync in the pgsql dataset in ZFS> zfs set sync=disabled mydata/pgsql> or > 2) In> postgresql.conf, set synchronous_commit from on to off> > I know the above setting would lead to data loss (e.g.power goes off), any comments?> > PS:> 1) I have tried to use primarycache/secondarycache=metadata/none, it do not seem to help.> > 2)> I have tried the default setting on Linux too:> RHEL 6.3, ext4, stock postgresql 8.x, OTRS 3.1.> The web site is responsive and the benchmark result is more or less the same as FreeBSD with the 'sync' turned off.>\n > 3)> For FreeBSD, same setting with Postgresql on UFS:> The performance is between ZFS (default, sync enabled) and ZFS (sync disabled).> > Thanks,> Patrick> > --- On Mon, 1/7/13, Patrick Dung <[email protected]> wrote:> > From: Patrick Dung <[email protected]>> Subject: Sub optimal performance with default setting of Postgresql with FreeBSD 9.1 on ZFS> To: [email protected]> Date: Monday, January 7, 2013, 11:32 PM> > Hi,> > I have installed Postgresql 9.2.2 (complied by gcc) in FreeBSD 
9.1\n i386.> The pgsql base directory is in a ZFS dataset.> > I have noticed the performance is sub-optimal, but I know the default setting should be the most safest one to be use (without possible data corruption/loss).> > a) I use OTRS ticketing system, the backend is PostgreSQL.> The user interactive response is not slow (switching web pages or create a change).> > b) There is a benchmark in the support module of OTRS.> It tested insert,update,select and delete performance.> The response time is slow (>10 sec), except select.> > I have done some research on web, with below settings (just one change, not both), the performance returned to normal:> > 1) Disabled sync in the pgsql dataset in ZFS> zfs set sync=disabled mydata/pgsql> or > 2) In> postgresql.conf, set synchronous_commit from on to off> > I know the\n above setting would lead to data loss (e.g.power goes off), any comments?> > PS:> 1) I have tried to use primarycache/secondarycache=metadata/none, it do not seem to help.> > 2)> I have tried the default setting on Linux too:> RHEL 6.3, stock postgresql 8.x, OTRS 3.1.> The web site is responsive and the benchmark result is more or less the same as FreeBSD with the 'sync' turned off.> > Thanks,> Patrick-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 9 Jan 2013 00:26:50 +0800 (SGT)",
"msg_from": "Patrick Dung <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sub optimal performance with default setting of Postgresql with\n\tFreeBSD 9.1 on ZFS"
}
] |
[
{
"msg_contents": "Is there a way to force a WAL flush so that async commits (from other \nconnections) are flushed, short of actually updating a sacrificial row?\n\nWould be nice to do it without generating anything extra, even if it is \nsomething that causes IO in the checkpoint.\n\nAm I right to think that an empty transaction won't do it, and nor will \na transaction that is just a NOTIFY?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 07 Jan 2013 21:49:42 +0000",
"msg_from": "james <[email protected]>",
"msg_from_op": true,
"msg_subject": "Forcing WAL flush"
},
{
"msg_contents": "\nLe 2013-01-07 à 16:49, james a écrit :\n\n> Is there a way to force a WAL flush so that async commits (from other connections) are flushed, short of actually updating a sacrificial row?\n> \n> Would be nice to do it without generating anything extra, even if it is something that causes IO in the checkpoint.\n> \n> Am I right to think that an empty transaction won't do it, and nor will a transaction that is just a NOTIFY?\n\nDoes pg_start_backup() trigger a full WAL flush?\n\nhttp://www.postgresql.org/docs/9.2/static/continuous-archiving.html#BACKUP-LOWLEVEL-BASE-BACKUP\nhttp://www.postgresql.org/docs/9.2/static/functions-admin.html#FUNCTIONS-ADMIN-BACKUP-TABLE\n\nBye,\nFrançois\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 7 Jan 2013 16:55:39 -0500",
"msg_from": "=?iso-8859-1?Q?Fran=E7ois_Beausoleil?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing WAL flush"
},
{
"msg_contents": "> Le 2013-01-07 � 16:49, james a �crit :\n>\n>> Is there a way to force a WAL flush so that async commits (from other connections) are flushed, short of actually updating a sacrificial row?\n>>\n>> Would be nice to do it without generating anything extra, even if it is something that causes IO in the checkpoint.\n>>\n>> Am I right to think that an empty transaction won't do it, and nor will a transaction that is just a NOTIFY?\n>\n> Does pg_start_backup() trigger a full WAL flush?\n>\n> http://www.postgresql.org/docs/9.2/static/continuous-archiving.html#BACKUP-LOWLEVEL-BASE-BACKUP\n> http://www.postgresql.org/docs/9.2/static/functions-admin.html#FUNCTIONS-ADMIN-BACKUP-TABLE\n>\n> Bye,\n> Fran�ois\n\nThat sounds rather heavyweight!\n\nI'm looking for something lightweight - I might call this rather often, \nas a sort of application-level group commit where I commit async but \ndefer the ack to the requester (or other externally visible side \neffects) slightly until some other thread forces a flush.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 07 Jan 2013 22:05:38 +0000",
"msg_from": "james <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forcing WAL flush"
},
{
"msg_contents": "On Mon, Jan 7, 2013 at 1:49 PM, james <[email protected]> wrote:\n> Is there a way to force a WAL flush so that async commits (from other\n> connections) are flushed, short of actually updating a sacrificial row?\n>\n> Would be nice to do it without generating anything extra, even if it is\n> something that causes IO in the checkpoint.\n>\n> Am I right to think that an empty transaction won't do it, and nor will a\n> transaction that is just a NOTIFY?\n\nThis was discussed in \"[HACKERS] Pg_upgrade speed for many tables\".\n\nIt seemed like turning synchronous_commit back on and then creating an\ntemp table was the preferred method to force a flush. Although I\nwonder if that behavior might be optimized away at some point.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 7 Jan 2013 15:56:51 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing WAL flush"
}
] |
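A minimal sketch of the approach Jeff describes: one transaction that generates some WAL and commits synchronously will flush the log up to and including its own commit record, which also covers every asynchronous commit that happened before it. The temp table name is only illustrative, and, as Jeff notes, a future release could optimize the trick away:

    BEGIN;
    SET LOCAL synchronous_commit = on;                         -- this commit waits for the WAL flush
    CREATE TEMP TABLE wal_flush_dummy (x int) ON COMMIT DROP;  -- guarantees the transaction writes WAL
    COMMIT;                                                    -- earlier async commits are now flushed too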
[
{
"msg_contents": "Hi!\n\nSmall query run on 9.0 very fast:\n\nSELECT * from sygma_arrear sar where sar.arrear_import_id = (\n select sa.arrear_import_id from sygma_arrear sa, arrear_import ai\n where sa.arrear_flag_id = 2\n AND sa.arrear_import_id = ai.id\n AND ai.import_type_id = 1\n order by report_date desc limit 1)\n AND sar.arrear_flag_id = 2\n AND sar.credit_id = 3102309\n\n\"Index Scan using sygma_arrear_credit_id on sygma_arrear sar\n(cost=0.66..362.03 rows=1 width=265)\"\n\" Index Cond: (credit_id = 3102309)\"\n\" Filter: ((arrear_import_id = $0) AND (arrear_flag_id = 2))\"\n\" InitPlan 1 (returns $0)\"\n\" -> Limit (cost=0.00..0.66 rows=1 width=8)\"\n\" -> Nested Loop (cost=0.00..3270923.14 rows=4930923 width=8)\"\n\" -> Index Scan Backward using report_date_bank_id_key\non arrear_import ai (cost=0.00..936.87 rows=444 width=8)\"\n\" Filter: (import_type_id = 1)\"\n*\" -> Index Scan using sygma_arrear_arrear_import_id_idx\non sygma_arrear sa (cost=0.00..6971.15 rows=31495 width=4)\"**\n**\" Index Cond: (sa.arrear_import_id = ai.id)\"**\n**\" Filter: (sa.arrear_flag_id = 2)\"**\n*\nEngine uses index - great.\n\nOn 9.2\n\n\"Index Scan using sygma_arrear_credit_id on sygma_arrear sar\n(cost=11.05..381.12 rows=1 width=265)\"\n\" Index Cond: (credit_id = 3102309)\"\n\" Filter: ((arrear_import_id = $0) AND (arrear_flag_id = 2))\"\n\" InitPlan 1 (returns $0)\"\n\" -> Limit (cost=0.00..11.05 rows=1 width=8)\"\n\" -> Nested Loop (cost=0.00..54731485.84 rows=4953899 width=8)\"\n\" Join Filter: (sa.arrear_import_id = ai.id)\"\n\" -> Index Scan Backward using report_date_bank_id_key\non arrear_import ai (cost=0.00..62.81 rows=469 width=8)\"\n\" Filter: (import_type_id = 1)\"\n*\" -> Materialize (cost=0.00..447641.42 rows=6126357\nwidth=4)\"**\n**\" -> Seq Scan on sygma_arrear sa\n(cost=0.00..393077.64 rows=6126357 width=4)\"**\n**\" Filter: (arrear_flag_id = 2)\"**\n*\nSeq scan... slooow.\n\nWhy that's happens? All configurations are identical. Only engine is\ndifferent.\n\nWhen I make index on to column: (arrear_import_id,arrear_flag_id) then\nengine use it and run fast.\n\n-- \nAndrzej Zawadzki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 10 Jan 2013 14:32:05 +0100",
"msg_from": "Andrzej Zawadzki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query after upgrade from 9.0 to 9.2"
},
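The two-column index mentioned in the last sentence is what lets the planner resolve both the arrear_import_id lookup and the arrear_flag_id filter from a single index. A sketch of that fix (the index name is chosen here for illustration):

    CREATE INDEX CONCURRENTLY sygma_arrear_import_id_flag_idx
        ON sygma_arrear (arrear_import_id, arrear_flag_id);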
{
"msg_contents": "On Thu, Jan 10, 2013 at 5:32 AM, Andrzej Zawadzki <[email protected]> wrote:\n>\n> Why that's happens? All configurations are identical. Only engine is\n> different.\n\nCould you post explain (analyze, buffers) instead of just explain?\nAlso, if you temporarily set enable_seqscan=off on 9.2, what plan do\nyou then get?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 10 Jan 2013 10:17:21 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after upgrade from 9.0 to 9.2"
},
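Spelled out, the diagnostics Jeff is asking for look like this in a psql session, using the query from the first message; enable_seqscan only affects the current session, so it is safe to toggle for a test:

    SET enable_seqscan = off;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM sygma_arrear sar
    WHERE sar.arrear_import_id = (
            SELECT sa.arrear_import_id
            FROM sygma_arrear sa, arrear_import ai
            WHERE sa.arrear_flag_id = 2
              AND sa.arrear_import_id = ai.id
              AND ai.import_type_id = 1
            ORDER BY report_date DESC LIMIT 1)
      AND sar.arrear_flag_id = 2
      AND sar.credit_id = 3102309;
    RESET enable_seqscan;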
{
"msg_contents": "On Thu, Jan 10, 2013 at 11:32 AM, Andrzej Zawadzki <[email protected]> wrote:\n\n> Hi!\n>\n> Small query run on 9.0 very fast:\n>\n> SELECT * from sygma_arrear sar where sar.arrear_import_id = (\n> select sa.arrear_import_id from sygma_arrear sa, arrear_import ai\n> where sa.arrear_flag_id = 2\n> AND sa.arrear_import_id = ai.id\n> AND ai.import_type_id = 1\n> order by report_date desc limit 1)\n> AND sar.arrear_flag_id = 2\n> AND sar.credit_id = 3102309\n>\n> \"Index Scan using sygma_arrear_credit_id on sygma_arrear sar\n> (cost=0.66..362.03 rows=1 width=265)\"\n> \" Index Cond: (credit_id = 3102309)\"\n> \" Filter: ((arrear_import_id = $0) AND (arrear_flag_id = 2))\"\n> \" InitPlan 1 (returns $0)\"\n> \" -> Limit (cost=0.00..0.66 rows=1 width=8)\"\n> \" -> Nested Loop (cost=0.00..3270923.14 rows=4930923 width=8)\"\n> \" -> Index Scan Backward using report_date_bank_id_key\n> on arrear_import ai (cost=0.00..936.87 rows=444 width=8)\"\n> \" Filter: (import_type_id = 1)\"\n> *\" -> Index Scan using sygma_arrear_arrear_import_id_idx\n> on sygma_arrear sa (cost=0.00..6971.15 rows=31495 width=4)\"**\n> **\" Index Cond: (sa.arrear_import_id = ai.id)\"**\n> **\" Filter: (sa.arrear_flag_id = 2)\"**\n> *\n> Engine uses index - great.\n>\n> On 9.2\n>\n> \"Index Scan using sygma_arrear_credit_id on sygma_arrear sar\n> (cost=11.05..381.12 rows=1 width=265)\"\n> \" Index Cond: (credit_id = 3102309)\"\n> \" Filter: ((arrear_import_id = $0) AND (arrear_flag_id = 2))\"\n> \" InitPlan 1 (returns $0)\"\n> \" -> Limit (cost=0.00..11.05 rows=1 width=8)\"\n> \" -> Nested Loop (cost=0.00..54731485.84 rows=4953899 width=8)\"\n> \" Join Filter: (sa.arrear_import_id = ai.id)\"\n> \" -> Index Scan Backward using report_date_bank_id_key\n> on arrear_import ai (cost=0.00..62.81 rows=469 width=8)\"\n> \" Filter: (import_type_id = 1)\"\n> *\" -> Materialize (cost=0.00..447641.42 rows=6126357\n> width=4)\"**\n> **\" -> Seq Scan on sygma_arrear sa\n> (cost=0.00..393077.64 rows=6126357 width=4)\"**\n> **\" Filter: (arrear_flag_id = 2)\"**\n> *\n> Seq scan... slooow.\n>\n> Why that's happens? All configurations are identical. 
Only engine is\n> different.\n>\n>\n\nHow did you do the upgrade?\nHave you tried to run a VACUUM ANALYZE on sygma_arrear?\n\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres",
"msg_date": "Thu, 10 Jan 2013 16:48:30 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after upgrade from 9.0 to 9.2"
},
{
"msg_contents": "On 10.01.2013 19:17, Jeff Janes wrote:\n> On Thu, Jan 10, 2013 at 5:32 AM, Andrzej Zawadzki <[email protected]> wrote:\n>> Why that's happens? All configurations are identical. Only engine is\n>> different.\n> Could you post explain (analyze, buffers) instead of just explain?\nImpossible, 1h of waiting and I've killed that.\n> Also, if you temporarily set enable_seqscan=off on 9.2, what plan do\n> you then get?\n\nPlan is different.\n\n\"Index Scan using sygma_arrear_credit_id on sygma_arrear sar \n(cost=11.07..390.66 rows=1 width=265)\"\n\" Index Cond: (credit_id = 3102309)\"\n\" Filter: ((arrear_import_id = $0) AND (arrear_flag_id = 2))\"\n\" InitPlan 1 (returns $0)\"\n\" -> Limit (cost=0.00..11.07 rows=1 width=8)\"\n\" -> Nested Loop (cost=0.00..54961299.49 rows=4963314 width=8)\"\n\" Join Filter: (sa.arrear_import_id = ai.id)\"\n\" -> Index Scan Backward using report_date_bank_id_key\non arrear_import ai (cost=0.00..62.81 rows=469 width=8)\"\n\" Filter: (import_type_id = 1)\"\n\" -> Materialize (cost=0.00..574515.68 rows=6138000\nwidth=4)\"\n\" -> Index Scan using\nsygma_arrear_arrear_import_id_idx on sygma_arrear sa \n(cost=0.00..519848.68 rows=6138000 width=4)\"\n\" Filter: (arrear_flag_id = 2)\"\n\nThe real query is still slow.\n\n-- \nAndrzej Zawadzki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jan 2013 09:13:37 +0100",
"msg_from": "Andrzej Zawadzki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query after upgrade from 9.0 to 9.2"
},
{
"msg_contents": "On 10.01.2013 19:48, Matheus de Oliveira wrote:\n>\n>\n> On Thu, Jan 10, 2013 at 11:32 AM, Andrzej Zawadzki <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> Hi!\n>\n> Small query run on 9.0 very fast:\n>\n> SELECT * from sygma_arrear sar where sar.arrear_import_id = (\n> select sa.arrear_import_id from sygma_arrear sa,\n> arrear_import ai\n> where sa.arrear_flag_id = 2\n> AND sa.arrear_import_id = ai.id <http://ai.id>\n> AND ai.import_type_id = 1\n> order by report_date desc limit 1)\n> AND sar.arrear_flag_id = 2\n> AND sar.credit_id = 3102309 <tel:3102309>\n>\n> \"Index Scan using sygma_arrear_credit_id on sygma_arrear sar\n> (cost=0.66..362.03 rows=1 width=265)\"\n> \" Index Cond: (credit_id = 3102309 <tel:3102309>)\"\n> \" Filter: ((arrear_import_id = $0) AND (arrear_flag_id = 2))\"\n> \" InitPlan 1 (returns $0)\"\n> \" -> Limit (cost=0.00..0.66 rows=1 width=8)\"\n> \" -> Nested Loop (cost=0.00..3270923.14 rows=4930923\n> width=8)\"\n> \" -> Index Scan Backward using report_date_bank_id_key\n> on arrear_import ai (cost=0.00..936.87 rows=444 width=8)\"\n> \" Filter: (import_type_id = 1)\"\n> *\" -> Index Scan using\n> sygma_arrear_arrear_import_id_idx\n> on sygma_arrear sa (cost=0.00..6971.15 rows=31495 width=4)\"**\n> **\" Index Cond: (sa.arrear_import_id = ai.id\n> <http://ai.id>)\"**\n> **\" Filter: (sa.arrear_flag_id = 2)\"**\n> *\n> Engine uses index - great.\n>\n> On 9.2\n>\n> \"Index Scan using sygma_arrear_credit_id on sygma_arrear sar\n> (cost=11.05..381.12 rows=1 width=265)\"\n> \" Index Cond: (credit_id = 3102309 <tel:3102309>)\"\n> \" Filter: ((arrear_import_id = $0) AND (arrear_flag_id = 2))\"\n> \" InitPlan 1 (returns $0)\"\n> \" -> Limit (cost=0.00..11.05 rows=1 width=8)\"\n> \" -> Nested Loop (cost=0.00..54731485.84 rows=4953899\n> width=8)\"\n> \" Join Filter: (sa.arrear_import_id = ai.id\n> <http://ai.id>)\"\n> \" -> Index Scan Backward using report_date_bank_id_key\n> on arrear_import ai (cost=0.00..62.81 rows=469 width=8)\"\n> \" Filter: (import_type_id = 1)\"\n> *\" -> Materialize (cost=0.00..447641.42 rows=6126357\n> width=4)\"**\n> **\" -> Seq Scan on sygma_arrear sa\n> (cost=0.00..393077.64 rows=6126357 width=4)\"**\n> **\" Filter: (arrear_flag_id = 2)\"**\n> *\n> Seq scan... slooow.\n>\n> Why that's happens? All configurations are identical. Only engine is\n> different.\n>\n>\n>\n> How did you do the upgrade?\npg_upgrade and I think that this is source of problem.\nI have test database from dump/restore process and works properly.\n> Have you tried to run a VACUUM ANALYZE on sygma_arrear?\nYes I did - after upgrade all databases was vacuumed.\n\nvacuumdb -azv\n\nI'll try reindex all indexes at weekend\n\n-- \nAndrzej Zawadzki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jan 2013 09:23:01 +0100",
"msg_from": "Andrzej Zawadzki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query after upgrade from 9.0 to 9.2"
},
{
"msg_contents": "On Fri, Jan 11, 2013 at 12:13 AM, Andrzej Zawadzki <[email protected]> wrote:\n> On 10.01.2013 19:17, Jeff Janes wrote:\n\n>> Also, if you temporarily set enable_seqscan=off on 9.2, what plan do\n>> you then get?\n>\n> Plan is different.\n>\n\n> \" Join Filter: (sa.arrear_import_id = ai.id)\"\n\nIt is hard to imagine why it is not using\nsygma_arrear_arrear_import_id_idx for this given the plan is now\naccessing the index anyway. Have the types or encodings or collations\nsomehow become incompatible so that this index can no longer fulfill\nit?\n\nWhat if you just write a very simple join between the two tables with\nthe above join condition, with another highly selective condition on\nai?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jan 2013 09:12:16 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after upgrade from 9.0 to 9.2"
}
] |
[
{
"msg_contents": "My best regards for all...\n\nPlease. I need for an advice.\nI'm having a trouble, that puting others queries in wait state, becouse \nof ExclusiveLock granted by an Update that only update one row at each \ntime. This update occurs into a function and this function are \nexecuted several times and concurrently.\nBelow, query plan (explain):\n\nNested Loop (cost=16.91..36.32 rows=1 width=75)\n -> HashAggregate (cost=16.91..16.92 rows=1 width=4)\n -> Index Scan using unq_customer_idx_msisdn on customer \n(cost=0.00..16.90 rows=1 width=4)\n Index Cond: ((msisdn)::text = '558796013980'::text)\n -> Index Scan using pk_customer_rel_channel on customer_rel_channel \n(cost=0.00..19.39 rows=1 width=75)\n Index Cond: ((customer_rel_channel.id_customer = \ncustomer.id_customer) AND (customer_rel_channel.id_channel = 282))\n\nBut, the pg_locs shows:\n\nPID Relation User Transaction Access Mode Granted Query \nStart Query\n22569 customer_rel_channel postgres ExclusiveLock False \n2013-01-10 15:54:09.308056-02 UPDATE news.customer_rel_channel SET \nstatus = $1, source = $2\n WHERE news.customer_rel_channel.id_channel = $3 AND\n news.customer_rel_channel.id_customer IN\n (SELECT id_customer FROM public.customer WHERE \npublic.customer.msisdn = $4)\n\nI can't understand what happens here... This query can't be lock \ngranted becouse another instance of this query already granted it.\nI can't understand why an update that modify one row only need an \nExclusiveLock.\n\nThanks a lot!!\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 10 Jan 2013 16:01:58 -0200",
"msg_from": "PostgreSQL <[email protected]>",
"msg_from_op": true,
"msg_subject": "Updates on one row causing ExclusiveLock on PostgreSQL 8.3.5"
}
] |
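On the question above: a single-row UPDATE only takes RowExclusiveLock on the table, so an ExclusiveLock reported with granted = false usually just means the session is waiting, most often behind another uncommitted transaction that has already updated the same customer_rel_channel row. Such row-level waits are visible in pg_locks as an ungranted lock on the other session's transactionid. A hedged sketch of a query that pairs waiters with holders (written against the 8.x-era pg_locks columns):

    SELECT waiter.pid AS waiting_pid,
           holder.pid AS holding_pid,
           waiter.transactionid
    FROM pg_locks waiter
    JOIN pg_locks holder
      ON holder.locktype = 'transactionid'
     AND waiter.locktype = 'transactionid'
     AND holder.transactionid = waiter.transactionid
     AND holder.granted
     AND NOT waiter.granted;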
[
{
"msg_contents": "My best regards for all...\n\nPlease. I need for an advice.\nI'm having a trouble, that puting others queries in wait state, becouse \nof ExclusiveLock granted by an Update that only update one row at each \ntime. This update occurs into a function and this function are \nexecuted several times and concurrently.\nBelow, query plan (explain):\n\nNested Loop (cost=16.91..36.32 rows=1 width=75)\n -> HashAggregate (cost=16.91..16.92 rows=1 width=4)\n -> Index Scan using unq_customer_idx_msisdn on customer \n(cost=0.00..16.90 rows=1 width=4)\n Index Cond: ((msisdn)::text = '558796013980'::text)\n -> Index Scan using pk_customer_rel_channel on customer_rel_channel \n(cost=0.00..19.39 rows=1 width=75)\n Index Cond: ((customer_rel_channel.id_customer = \ncustomer.id_customer) AND (customer_rel_channel.id_channel = 282))\n\nBut, the pg_locs shows:\n\nPID Relation User Transaction Access Mode Granted Query \nStart Query\n22569 customer_rel_channel postgres ExclusiveLock False \n2013-01-10 15:54:09.308056-02 UPDATE news.customer_rel_channel SET \nstatus = $1, source = $2\n WHERE news.customer_rel_channel.id_channel = $3 AND\n news.customer_rel_channel.id_customer IN\n (SELECT id_customer FROM public.customer WHERE \npublic.customer.msisdn = $4)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 10 Jan 2013 16:04:02 -0200",
"msg_from": "PostgreSQL <[email protected]>",
"msg_from_op": true,
"msg_subject": "Updates on one row causing ExclusiveLock on PostgreSQL 8.3.5"
}
] |
[
{
"msg_contents": "Hi All,\n\nInspired by Charles' thread and the work of Emmanuel [1], I have made some\nexperiments trying to create a trigger to make partitioning using C\nlanguage.\n\nThe first attempt was not good, I tried to use SPI [2] to create a query to\ninsert into the correct child table, but it took almost no improvement\ncompared with the PL/pgSQL code.\n\nThen, I used the Emmanuel's code and digged into the PG source code\n(basically at copy.c) to create a trigger function that insert to the\npartitioned table direct (using heap_insert instead of SQL) [3], and the\nimprovement was about 100% (not 4/5 times like got by Emmanuel ). The\nfunction has no other performance trick, like caching the relations or\nsomething like that.\n\nThe function does partition based on month/year, but it's easy to change to\nday/month/year or something else. And, of course, it's not ready for\nproduction, as I'm not sure if it can break things.\n\nThe tests were made using a PL/pgSQL code to insert 1 milion rows, and I\ndon't know if this is a real-life-like test (probably not). And there is a\ntable partitioned by month, with a total of 12 partitions (with the\ninsertions randomly distributed through all 2012).\n\nI put the trigger and the experiments on a repository at GitHub:\n\nhttps://github.com/matheusoliveira/pg_partitioning_tests\n\nI don't know if this is the right list for the topic, and I thought the old\none has to many messages, so I created this one to show this tirgger sample\nand see if someone has a comment about it.\n\nPS: I'd be glad if someone could revise the code to make sure it don't\nbrake in some corner case. I'm made some tests [4], but not sure if they\ncovered everything.\nPS2: It surely will not work on old versions of PostgreSQL, perhaps not\neven 9.1 (not tested).\n\n\n[1] http://archives.postgresql.org/pgsql-hackers/2008-12/msg01221.php and\nhttp://archives.postgresql.org/pgsql-performance/2012-12/msg00189.php\n[2]\nhttps://github.com/matheusoliveira/pg_partitioning_tests/blob/master/src/spi/partition_insert_trigger_spi.c\n[3]\nhttps://github.com/matheusoliveira/pg_partitioning_tests/blob/master/src/partition_insert_trigger.c\n[4]\nhttps://github.com/matheusoliveira/pg_partitioning_tests/tree/master/test/regress\n\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nHi All,\n\nInspired by Charles' thread and the work of Emmanuel [1], I have made some \nexperiments trying to create a trigger to make partitioning using C \nlanguage.\n\nThe first attempt was not good, I tried to use SPI [2] to create a query\n to insert into the correct child table, but it took almost no \nimprovement compared with the PL/pgSQL code.\n\nThen, I used the Emmanuel's code and digged into the PG source code (basically at copy.c) to \ncreate a trigger function that insert to the partitioned table direct \n(using heap_insert instead of SQL) [3], and the improvement was about \n100% (not 4/5 times like got by Emmanuel ). The function has no other \nperformance trick, like caching the relations or something like that.\n\nThe function does partition based on month/year, but it's easy to change to day/month/year or something else. And, of course, it's not ready for production, as I'm not sure if it can break things.\n\nThe tests were made using a PL/pgSQL code to insert 1 milion rows, and I\n don't know if this is a real-life-like test (probably not). 
And there \nis a table partitioned by month, with a total of 12 partitions (with the\n insertions randomly distributed through all 2012).\n\nI put the trigger and the experiments on a repository at GitHub:\n\nhttps://github.com/matheusoliveira/pg_partitioning_tests\nI don't know if this is the right list for the topic, and I thought the old one has to many messages, so I created this one to show this tirgger sample and see if someone has a comment about it.\nPS: I'd be glad if someone could revise the code to make sure it don't brake in some\n corner case. I'm made some tests [4], but not sure if they \ncovered everything.PS2: It surely will not work on old versions of PostgreSQL, perhaps not even 9.1 (not tested).\n\n[1] http://archives.postgresql.org/pgsql-hackers/2008-12/msg01221.php and http://archives.postgresql.org/pgsql-performance/2012-12/msg00189.php\n\n[2] https://github.com/matheusoliveira/pg_partitioning_tests/blob/master/src/spi/partition_insert_trigger_spi.c\n\n\n[3] https://github.com/matheusoliveira/pg_partitioning_tests/blob/master/src/partition_insert_trigger.c\n\n\n[4] https://github.com/matheusoliveira/pg_partitioning_tests/tree/master/test/regress\n\n\nRegards,\n-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Thu, 10 Jan 2013 16:45:43 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partition insert trigger using C language"
},
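For readers who want to try the trigger from the repository, the surrounding SQL looks roughly like the classic inheritance setup sketched below. The table, column, and function names are illustrative, not taken from the test scripts; the C trigger function is attached exactly like a PL/pgSQL one would be:

    -- the C trigger function (library path and name assumed for illustration)
    CREATE FUNCTION partition_insert_trigger() RETURNS trigger
        AS '$libdir/partition_insert_trigger' LANGUAGE C;

    CREATE TABLE measurement (
        id     serial,
        tstamp timestamp NOT NULL,
        value  numeric
    );

    CREATE TABLE measurement_2012_01 (
        CHECK (tstamp >= DATE '2012-01-01' AND tstamp < DATE '2012-02-01')
    ) INHERITS (measurement);
    -- ... one child table per month of 2012 ...

    CREATE TRIGGER measurement_partition_insert
        BEFORE INSERT ON measurement
        FOR EACH ROW EXECUTE PROCEDURE partition_insert_trigger();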
{
"msg_contents": "On 10.01.2013 20:45, Matheus de Oliveira wrote:\n> Inspired by Charles' thread and the work of Emmanuel [1], I have made some\n> experiments trying to create a trigger to make partitioning using C\n> language.\n>\n> The first attempt was not good, I tried to use SPI [2] to create a query to\n> insert into the correct child table, but it took almost no improvement\n> compared with the PL/pgSQL code.\n\nThe right way to do this with SPI is to prepare each insert-statement on \nfirst invocation (SPI_prepare + SPI_keepplan), and reuse the plan after \nthat (SPI_execute_with_args).\n\nIf you construct and plan the query on every invocation, it's not \nsurprising that it's no different from PL/pgSQL performance.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 10 Jan 2013 20:54:57 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition insert trigger using C language"
},
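Heikki's suggestion is the SPI-level form of prepare-once/execute-many. At the SQL level the same idea looks like the sketch below (child table and column names are the illustrative ones used above); in the C trigger, preparing and keeping the plan plays the role of PREPARE, and executing the saved plan for each row plays the role of EXECUTE:

    -- prepared once per child partition, typically the first time a row lands there
    PREPARE insert_measurement_2012_01 (timestamp, numeric) AS
        INSERT INTO measurement_2012_01 (tstamp, value) VALUES ($1, $2);

    -- later rows for that partition only pay execution cost, not parse/plan cost
    EXECUTE insert_measurement_2012_01 ('2012-01-15 10:00', 42.0);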
{
"msg_contents": "On Thu, Jan 10, 2013 at 4:54 PM, Heikki Linnakangas <[email protected]\n> wrote:\n\n> On 10.01.2013 20:45, Matheus de Oliveira wrote:\n>\n>> Inspired by Charles' thread and the work of Emmanuel [1], I have made some\n>> experiments trying to create a trigger to make partitioning using C\n>> language.\n>>\n>> The first attempt was not good, I tried to use SPI [2] to create a query\n>> to\n>> insert into the correct child table, but it took almost no improvement\n>> compared with the PL/pgSQL code.\n>>\n>\n> The right way to do this with SPI is to prepare each insert-statement on\n> first invocation (SPI_prepare + SPI_keepplan), and reuse the plan after\n> that (SPI_execute_with_args).\n>\n> If you construct and plan the query on every invocation, it's not\n> surprising that it's no different from PL/pgSQL performance.\n>\n>\nYeah. I thought about that, but the problem was that I assumed the INSERTs\ncame with random date, so in the worst scenario I would have to keep the\nplans of all of the child partitions. Am I wrong?\n\nBut thinking better, even with hundreds of partitions, it wouldn't use to\nmuch memory/resource, would it?\n\nIn fact, I didn't give to much attention to SPI method, because the other\none is where we can have more fun, =P.\n\nAnyway, I'll change the code (maybe now), and see if it gets closer to the\nother method (that uses heap_insert), and will post back the results here.\n\nThanks,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Thu, Jan 10, 2013 at 4:54 PM, Heikki Linnakangas <[email protected]> wrote:\nOn 10.01.2013 20:45, Matheus de Oliveira wrote:\n\nInspired by Charles' thread and the work of Emmanuel [1], I have made some\nexperiments trying to create a trigger to make partitioning using C\nlanguage.\n\nThe first attempt was not good, I tried to use SPI [2] to create a query to\ninsert into the correct child table, but it took almost no improvement\ncompared with the PL/pgSQL code.\n\n\nThe right way to do this with SPI is to prepare each insert-statement on first invocation (SPI_prepare + SPI_keepplan), and reuse the plan after that (SPI_execute_with_args).\n\nIf you construct and plan the query on every invocation, it's not surprising that it's no different from PL/pgSQL performance.\nYeah. I thought about that, but the problem was that I assumed the INSERTs came with random date, so in the worst scenario I would have to keep the plans of all of the child partitions. Am I wrong?\nBut thinking better, even with hundreds of partitions, it wouldn't use to much memory/resource, would it?In fact, I didn't give to much attention to SPI method, because the other one is where we can have more fun, =P.\nAnyway, I'll change the code (maybe now), and see if it gets closer to the other method (that uses heap_insert), and will post back the results here.Thanks,-- Matheus de Oliveira\n\nAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Thu, 10 Jan 2013 17:11:48 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition insert trigger using C language"
},
{
"msg_contents": "2013/1/10 Heikki Linnakangas <[email protected]>:\n> On 10.01.2013 20:45, Matheus de Oliveira wrote:\n>>\n>> Inspired by Charles' thread and the work of Emmanuel [1], I have made some\n>> experiments trying to create a trigger to make partitioning using C\n>> language.\n>>\n>> The first attempt was not good, I tried to use SPI [2] to create a query\n>> to\n>> insert into the correct child table, but it took almost no improvement\n>> compared with the PL/pgSQL code.\n>\n>\n> The right way to do this with SPI is to prepare each insert-statement on\n> first invocation (SPI_prepare + SPI_keepplan), and reuse the plan after that\n> (SPI_execute_with_args).\n>\n> If you construct and plan the query on every invocation, it's not surprising\n> that it's no different from PL/pgSQL performance.\n\nThis a problematic for partitioning, because you need too much plans -\nand direct access is probably better - I am thinking.\n\nOn second hand, there is relative high possibility to get inconsistent\nrelations - broken indexes, if somebody don't write trigger well.\n\nMaybe we can enhance copy to support partitioning better. Now I have a\nprototype for fault tolerant copy and it can work nice together with\nsome partitioning support\n\nRegards\n\nPavel\n\n>\n> - Heikki\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 10 Jan 2013 20:14:58 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition insert trigger using C language"
},
{
"msg_contents": "On 10.01.2013 21:11, Matheus de Oliveira wrote:\n> On Thu, Jan 10, 2013 at 4:54 PM, Heikki Linnakangas<[email protected]\n>> wrote:\n>\n>> The right way to do this with SPI is to prepare each insert-statement on\n>> first invocation (SPI_prepare + SPI_keepplan), and reuse the plan after\n>> that (SPI_execute_with_args).\n>>\n>> If you construct and plan the query on every invocation, it's not\n>> surprising that it's no different from PL/pgSQL performance.\n>\n> Yeah. I thought about that, but the problem was that I assumed the INSERTs\n> came with random date, so in the worst scenario I would have to keep the\n> plans of all of the child partitions. Am I wrong?\n>\n> But thinking better, even with hundreds of partitions, it wouldn't use to\n> much memory/resource, would it?\n\nRight, a few hundred saved plans would probably still be ok. And if that \never becomes a problem, you could keep the plans in a LRU list and only \nkeep the last 100 plans or so.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 10 Jan 2013 21:22:25 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition insert trigger using C language"
},
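One hedged way to realize the bounded cache suggested above, as a fragment that could accompany a trigger like the sketch earlier in this thread. The key format ("YYYY_MM"), the cache size and the simple move-to-front strategy are assumptions, not code from the thread; evicted plans are released with SPI_freeplan so they do not accumulate.

#define PLAN_CACHE_SIZE 100

typedef struct PlanCacheEntry
{
    char        key[16];        /* partition suffix, e.g. "2012_01" */
    SPIPlanPtr  plan;           /* plan already passed through SPI_keepplan() */
} PlanCacheEntry;

static PlanCacheEntry plan_cache[PLAN_CACHE_SIZE];
static int            plan_cache_used = 0;

/* look up a saved plan and move it to the front (most recently used) */
static SPIPlanPtr
plan_cache_lookup(const char *key)
{
    int i;

    for (i = 0; i < plan_cache_used; i++)
    {
        if (strcmp(plan_cache[i].key, key) == 0)
        {
            PlanCacheEntry hit = plan_cache[i];

            memmove(&plan_cache[1], &plan_cache[0], i * sizeof(PlanCacheEntry));
            plan_cache[0] = hit;
            return hit.plan;
        }
    }
    return NULL;
}

/* remember a freshly prepared and kept plan, evicting the least recently used */
static void
plan_cache_store(const char *key, SPIPlanPtr plan)
{
    if (plan_cache_used == PLAN_CACHE_SIZE)
    {
        SPI_freeplan(plan_cache[PLAN_CACHE_SIZE - 1].plan);
        plan_cache_used--;
    }
    memmove(&plan_cache[1], &plan_cache[0],
            plan_cache_used * sizeof(PlanCacheEntry));
    strlcpy(plan_cache[0].key, key, sizeof(plan_cache[0].key));
    plan_cache[0].plan = plan;
    plan_cache_used++;
}

A linear scan over at most 100 entries is cheap next to executing the insert itself; a hash table only starts to pay off with far more partitions.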
{
"msg_contents": "On Thu, Jan 10, 2013 at 5:22 PM, Heikki Linnakangas <[email protected]\n> wrote:\n\n> On 10.01.2013 21:11, Matheus de Oliveira wrote:\n>\n>> On Thu, Jan 10, 2013 at 4:54 PM, Heikki Linnakangas<hlinnakangas@**\n>> vmware.com <[email protected]>\n>>\n>>> wrote:\n>>>\n>>\n>> The right way to do this with SPI is to prepare each insert-statement on\n>>> first invocation (SPI_prepare + SPI_keepplan), and reuse the plan after\n>>> that (SPI_execute_with_args).\n>>>\n>>> If you construct and plan the query on every invocation, it's not\n>>> surprising that it's no different from PL/pgSQL performance.\n>>>\n>>\n>> Yeah. I thought about that, but the problem was that I assumed the INSERTs\n>> came with random date, so in the worst scenario I would have to keep the\n>> plans of all of the child partitions. Am I wrong?\n>>\n>> But thinking better, even with hundreds of partitions, it wouldn't use to\n>> much memory/resource, would it?\n>>\n>\n> Right, a few hundred saved plans would probably still be ok. And if that\n> ever becomes a problem, you could keep the plans in a LRU list and only\n> keep the last 100 plans or so.\n>\n>\nI have made a small modification to keep the plans, and it got from\n33957.768ms to 43782.376ms. I'm not sure if I did something wrong/stupid on\nthe code [1], or if something else broke my test. I can't rerun the test\ntoday, but I'll do that as soon as I have time.\n\n[1]\nhttps://github.com/matheusoliveira/pg_partitioning_tests/blob/master/src/spi/partition_insert_trigger_spi.c\n\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Thu, Jan 10, 2013 at 5:22 PM, Heikki Linnakangas <[email protected]> wrote:\nOn 10.01.2013 21:11, Matheus de Oliveira wrote:\n\nOn Thu, Jan 10, 2013 at 4:54 PM, Heikki Linnakangas<[email protected]\n\nwrote:\n\n\n\nThe right way to do this with SPI is to prepare each insert-statement on\nfirst invocation (SPI_prepare + SPI_keepplan), and reuse the plan after\nthat (SPI_execute_with_args).\n\nIf you construct and plan the query on every invocation, it's not\nsurprising that it's no different from PL/pgSQL performance.\n\n\nYeah. I thought about that, but the problem was that I assumed the INSERTs\ncame with random date, so in the worst scenario I would have to keep the\nplans of all of the child partitions. Am I wrong?\n\nBut thinking better, even with hundreds of partitions, it wouldn't use to\nmuch memory/resource, would it?\n\n\nRight, a few hundred saved plans would probably still be ok. And if that ever becomes a problem, you could keep the plans in a LRU list and only keep the last 100 plans or so.\nI have made a small modification to keep the plans, and it got from 33957.768ms to 43782.376ms. I'm not sure if I did something wrong/stupid on the code [1], or if something else broke my test. I can't rerun the test today, but I'll do that as soon as I have time.\n[1] https://github.com/matheusoliveira/pg_partitioning_tests/blob/master/src/spi/partition_insert_trigger_spi.c\n-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Thu, 10 Jan 2013 17:48:15 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition insert trigger using C language"
},
{
"msg_contents": "________________________________\n> From: [email protected] \n> Date: Thu, 10 Jan 2013 16:45:43 -0200 \n> Subject: Partition insert trigger using C language \n> To: [email protected] \n> CC: [email protected] \n> \n> Hi All, \n> \n> Inspired by Charles' thread and the work of Emmanuel [1], I have made \n> some experiments trying to create a trigger to make partitioning using \n> C language. \n> \n> The first attempt was not good, I tried to use SPI [2] to create a \n> query to insert into the correct child table, but it took almost no \n> improvement compared with the PL/pgSQL code. \n> \n> Then, I used the Emmanuel's code and digged into the PG source code \n> (basically at copy.c) to create a trigger function that insert to the \n> partitioned table direct (using heap_insert instead of SQL) [3], and \n> the improvement was about 100% (not 4/5 times like got by Emmanuel ). \n> The function has no other performance trick, like caching the relations \n> or something like that. \n> \n> The function does partition based on month/year, but it's easy to \n> change to day/month/year or something else. And, of course, it's not \n> ready for production, as I'm not sure if it can break things. \n> \n> The tests were made using a PL/pgSQL code to insert 1 milion rows, and \n> I don't know if this is a real-life-like test (probably not). And there \n> is a table partitioned by month, with a total of 12 partitions (with \n> the insertions randomly distributed through all 2012). \n> \n> I put the trigger and the experiments on a repository at GitHub: \n> \n> https://github.com/matheusoliveira/pg_partitioning_tests \n> \n> I don't know if this is the right list for the topic, and I thought the \n> old one has to many messages, so I created this one to show this \n> tirgger sample and see if someone has a comment about it. \n> \n> PS: I'd be glad if someone could revise the code to make sure it don't \n> brake in some corner case. I'm made some tests [4], but not sure if \n> they covered everything. \n> PS2: It surely will not work on old versions of PostgreSQL, perhaps not \n> even 9.1 (not tested). \n> \n> \n> [1] http://archives.postgresql.org/pgsql-hackers/2008-12/msg01221.php \n> and \n> http://archives.postgresql.org/pgsql-performance/2012-12/msg00189.php \n> [2] \n> https://github.com/matheusoliveira/pg_partitioning_tests/blob/master/src/spi/partition_insert_trigger_spi.c \n> [3] \n> https://github.com/matheusoliveira/pg_partitioning_tests/blob/master/src/partition_insert_trigger.c \n> [4] \n> https://github.com/matheusoliveira/pg_partitioning_tests/tree/master/test/regress \n> \n> \n> Regards, \n> -- \n> Matheus de Oliveira \n> Analista de Banco de Dados \n> Dextra Sistemas - MPS.Br nível F! \n> www.dextra.com.br/postgres<http://www.dextra.com.br/postgres/> \n\n\nInteresting that you got an improvement. In my case I get\nalmost no improvement at all:\n\n \n\n\n \n \n PL/SQL – Dynamic Trigger\n \n \n 4:15:54\n \n \n \n \n PL/SQL - CASE / WHEN Statements\n \n \n 4:12:29\n \n \n \n \n PL/SQL\n - IF Statements\n \n \n 4:12:39\n \n \n \n \n C\n Trigger\n \n \n 4:10:49\n \n \n\n\n \n\nHere is my code, I’m using heap insert and updating the indexes.\nWith a similar approach of yours.\n\nThe trigger is aware of \n\nhttp://www.charlesrg.com/~charles/pgsql/partition2.c \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 10 Jan 2013 14:57:47 -0500",
"msg_from": "Charles Gomes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition insert trigger using C language"
},
{
"msg_contents": "On 10.01.2013 21:48, Matheus de Oliveira wrote:\n> I have made a small modification to keep the plans, and it got from\n> 33957.768ms to 43782.376ms.\n\nIf I'm reading results.txt correctly, the avg runtimes are:\n\nC and SPI_execute_with_args: 58567.708 ms\nC and SPI_(prepare/keepplan/execute_plan): 43782.376 ms\nC and heap_insert: 33957.768 ms\n\nSo switching to prepared plans helped quite a lot, but it's still slower \nthan direct heap_inserts.\n\nOne thing that caught my eye:\n\n> CREATE OR REPLACE FUNCTION partition_insert_trigger_spi()\n> RETURNS trigger\n> LANGUAGE C\n> VOLATILE STRICT\n> AS 'partition_insert_trigger_spi','partition_insert_trigger_spi'\n> SET DateStyle TO 'ISO';\n\nCalling a function with SET options has a fair amount of overhead, to \nset/restore the GUC on every invocation. That should be avoided in a \nperformance critical function like this.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jan 2013 12:19:55 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition insert trigger using C language"
},
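If the C code is changed so it no longer depends on DateStyle (the binary-date approach discussed further down in the thread), the per-call GUC save/restore disappears simply by dropping the SET clause from the definition quoted above. A hedged variant, assuming the same library and symbol names:

CREATE OR REPLACE FUNCTION partition_insert_trigger_spi()
RETURNS trigger
LANGUAGE C
VOLATILE STRICT
AS 'partition_insert_trigger_spi', 'partition_insert_trigger_spi';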
{
"msg_contents": "On Fri, Jan 11, 2013 at 8:19 AM, Heikki Linnakangas <[email protected]\n> wrote:\n\n> On 10.01.2013 21:48, Matheus de Oliveira wrote:\n>\n>> I have made a small modification to keep the plans, and it got from\n>> 33957.768ms to 43782.376ms.\n>>\n>\n> If I'm reading results.txt correctly, the avg runtimes are:\n>\n> C and SPI_execute_with_args: 58567.708 ms\n> C and SPI_(prepare/keepplan/execute_**plan): 43782.376 ms\n> C and heap_insert: 33957.768 ms\n>\n> So switching to prepared plans helped quite a lot, but it's still slower\n> than direct heap_inserts.\n>\n>\nHumm... You are right, I misread what it before, sorry. The 33957.768ms was\nwith heap_insert.\n\n\n\n> One thing that caught my eye:\n>\n> CREATE OR REPLACE FUNCTION partition_insert_trigger_spi()\n>> RETURNS trigger\n>> LANGUAGE C\n>> VOLATILE STRICT\n>> AS 'partition_insert_trigger_spi'**,'partition_insert_trigger_**spi'\n>> SET DateStyle TO 'ISO';\n>>\n>\n> Calling a function with SET options has a fair amount of overhead, to\n> set/restore the GUC on every invocation. That should be avoided in a\n> performance critical function like this.\n>\n>\nI (stupidly) used SPI_getvalue [1] and expected it to always return as\nYYYY-MM-DD, but them I remembered it would do that only with DateStyle=ISO.\n\nBut the truth is that I couldn't see any overhead, because the function was\nwithout that on my first tests, and after that I saw no difference on the\ntests. I think I should use SPI_getbinvalue instead, but I don't know how\nto parse the result to get year and month, any help on that?\n\n[1]\nhttps://github.com/matheusoliveira/pg_partitioning_tests/blob/master/src/spi/partition_insert_trigger_spi.c#L103\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Fri, Jan 11, 2013 at 8:19 AM, Heikki Linnakangas <[email protected]> wrote:\nOn 10.01.2013 21:48, Matheus de Oliveira wrote:\n\nI have made a small modification to keep the plans, and it got from\n33957.768ms to 43782.376ms.\n\n\nIf I'm reading results.txt correctly, the avg runtimes are:\n\nC and SPI_execute_with_args: 58567.708 ms\nC and SPI_(prepare/keepplan/execute_plan): 43782.376 ms\nC and heap_insert: 33957.768 ms\n\nSo switching to prepared plans helped quite a lot, but it's still slower than direct heap_inserts.\nHumm... You are right, I misread what it before, sorry. The 33957.768ms was with heap_insert. \n\n\nOne thing that caught my eye:\n\n\nCREATE OR REPLACE FUNCTION partition_insert_trigger_spi()\nRETURNS trigger\nLANGUAGE C\nVOLATILE STRICT\nAS 'partition_insert_trigger_spi','partition_insert_trigger_spi'\nSET DateStyle TO 'ISO';\n\n\nCalling a function with SET options has a fair amount of overhead, to set/restore the GUC on every invocation. That should be avoided in a performance critical function like this.\nI (stupidly) used SPI_getvalue [1] and expected it to always return as YYYY-MM-DD, but them I remembered it would do that only with DateStyle=ISO.But the truth is that I couldn't see any overhead, because the function was without that on my first tests, and after that I saw no difference on the tests. I think I should use SPI_getbinvalue instead, but I don't know how to parse the result to get year and month, any help on that?\n[1] https://github.com/matheusoliveira/pg_partitioning_tests/blob/master/src/spi/partition_insert_trigger_spi.c#L103\nRegards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Fri, 11 Jan 2013 08:36:16 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition insert trigger using C language"
},
{
"msg_contents": "On Thu, Jan 10, 2013 at 5:51 PM, Charles Gomes <\[email protected]> wrote:\n\n> ** **\n>\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Matheus de Oliveira\n> *Sent:* Thursday, January 10, 2013 2:12 PM\n> *To:* Heikki Linnakangas\n> *Cc:* pgsql-performance; Charles Gomes\n> *Subject:* Re: [PERFORM] Partition insert trigger using C language****\n>\n> ** **\n>\n> ** **\n>\n> On Thu, Jan 10, 2013 at 4:54 PM, Heikki Linnakangas <\n> [email protected]> wrote:****\n>\n>> On 10.01.2013 20:45, Matheus de Oliveira wrote:\n>>\n>> Inspired by Charles' thread and the work of Emmanuel [1], I have made some\n>> experiments trying to create a trigger to make partitioning using C\n>> language.\n>>\n>> The first attempt was not good, I tried to use SPI [2] to create a query\n>> to\n>> insert into the correct child table, but it took almost no improvement\n>> compared with the PL/pgSQL code.\n>>\n>>\n>>\n>> The right way to do this with SPI is to prepare each insert-statement on\n>> first invocation (SPI_prepare + SPI_keepplan), and reuse the plan after\n>> that (SPI_execute_with_args).\n>>\n>> If you construct and plan the query on every invocation, it's not\n>> surprising that it's no different from PL/pgSQL performance.\n>>\n>>\n>> Yeah. I thought about that, but the problem was that I assumed the\n>> INSERTs came with random date, so in the worst scenario I would have to\n>> keep the plans of all of the child partitions. Am I wrong?\n>>\n>> But thinking better, even with hundreds of partitions, it wouldn't use to\n>> much memory/resource, would it?\n>>\n>> In fact, I didn't give to much attention to SPI method, because the other\n>> one is where we can have more fun, =P.\n>>\n>> Anyway, I'll change the code (maybe now), and see if it gets closer to\n>> the other method (that uses heap_insert), and will post back the results\n>> here.\n>>\n>\n>>\n>\n>\n ****\n>\n> Interesting that you got an improvement. In my case I get almost no\n> improvement at all:****\n>\n> ** **\n>\n> PL/SQL – Dynamic Trigger****\n>\n> 4:15:54****\n>\n> PL/SQL - CASE / WHEN Statements****\n>\n> 4:12:29****\n>\n> PL/SQL - IF Statements****\n>\n> 4:12:39****\n>\n> C Trigger****\n>\n> 4:10:49****\n>\n> ** **\n>\n> Here is my code, I’m using heap insert and updating the indexes. With a\n> similar approach of yours.****\n>\n> The trigger is aware of ****\n>\n> http://www.charlesrg.com/~charles/pgsql/partition2.c****\n>\n> **\n>\n\nHumm... Looking at your code, I saw no big difference from mine. The only\nthing I saw is that you don't fire triggers, but it would be even faster\nthis way. Another thing that could cause that is the number of partitions,\nI tried only with 12.\n\nCould you make a test suite? Or try to run with my function in your\nscenario? 
It would be easy to make it get the partitions by day [1].\n\n[1] https://gist.github.com/4509782\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Thu, Jan 10, 2013 at 5:51 PM, Charles Gomes <[email protected]> wrote:\n\n\n\n \nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Matheus de Oliveira\nSent: Thursday, January 10, 2013 2:12 PM\nTo: Heikki Linnakangas\nCc: pgsql-performance; Charles Gomes\nSubject: Re: [PERFORM] Partition insert trigger using C language\n \n \n\nOn Thu, Jan 10, 2013 at 4:54 PM, Heikki Linnakangas <[email protected]> wrote:\n\nOn 10.01.2013 20:45, Matheus de Oliveira wrote:\nInspired by Charles' thread and the work of Emmanuel [1], I have made some\nexperiments trying to create a trigger to make partitioning using C\nlanguage.\n\nThe first attempt was not good, I tried to use SPI [2] to create a query to\ninsert into the correct child table, but it took almost no improvement\ncompared with the PL/pgSQL code.\n \nThe right way to do this with SPI is to prepare each insert-statement on first invocation (SPI_prepare + SPI_keepplan), and reuse the plan after that (SPI_execute_with_args).\n\nIf you construct and plan the query on every invocation, it's not surprising that it's no different from PL/pgSQL performance.\n\nYeah. I thought about that, but the problem was that I assumed the INSERTs came with random date, so in the worst scenario I would have to keep the plans of all of the child partitions. Am I wrong?\n\nBut thinking better, even with hundreds of partitions, it wouldn't use to much memory/resource, would it?\n\nIn fact, I didn't give to much attention to SPI method, because the other one is where we can have more fun, =P.\n\nAnyway, I'll change the code (maybe now), and see if it gets closer to the other method (that uses heap_insert), and will post back the results here.\n\n \n\n\n\n\nInteresting that you got an improvement. In my case I get almost no improvement at all:\n \n\n\n\n\nPL/SQL – Dynamic Trigger\n\n\n 4:15:54\n\n\n\n\nPL/SQL - CASE / WHEN Statements\n\n\n 4:12:29\n\n\n\n\nPL/SQL - IF Statements\n\n\n 4:12:39\n\n\n\n\nC Trigger\n\n\n 4:10:49\n\n\n\n\n \nHere is my code, I’m using heap insert and updating the indexes. With a similar approach of yours.\nThe trigger is aware of\n\nhttp://www.charlesrg.com/~charles/pgsql/partition2.c\n Humm... Looking at your code, I saw no big difference from mine. The only thing I saw is that you don't fire triggers, but it would be even faster this way. Another thing that could cause that is the number of partitions, I tried only with 12.\nCould you make a test suite? Or try to run with my function in your scenario? It would be easy to make it get the partitions by day [1].[1] https://gist.github.com/4509782\nRegards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Fri, 11 Jan 2013 08:57:15 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition insert trigger using C language"
},
{
"msg_contents": "On 11.01.2013 12:36, Matheus de Oliveira wrote:\n> On Fri, Jan 11, 2013 at 8:19 AM, Heikki Linnakangas<[email protected]\n>> wrote:\n>\n>> One thing that caught my eye:\n>>\n>> CREATE OR REPLACE FUNCTION partition_insert_trigger_spi()\n>>> RETURNS trigger\n>>> LANGUAGE C\n>>> VOLATILE STRICT\n>>> AS 'partition_insert_trigger_spi'**,'partition_insert_trigger_**spi'\n>>> SET DateStyle TO 'ISO';\n>>\n>> Calling a function with SET options has a fair amount of overhead, to\n>> set/restore the GUC on every invocation. That should be avoided in a\n>> performance critical function like this.\n>\n> I (stupidly) used SPI_getvalue [1] and expected it to always return as\n> YYYY-MM-DD, but them I remembered it would do that only with DateStyle=ISO.\n>\n> But the truth is that I couldn't see any overhead, because the function was\n> without that on my first tests, and after that I saw no difference on the\n> tests.\n\nOh, ok then. I would've expected it to make a measurable difference.\n\n> I think I should use SPI_getbinvalue instead, but I don't know how\n> to parse the result to get year and month, any help on that?\n\nThe fastest way is probably to use j2date like date_out does:\n\n DateADT\t\tdate = DatumGetDateADT(x)\n int year, month, mday;\n\n if (DATE_NOT_FINITE(date))\n\telog(ERROR, \"date must be finite\");\n j2date(date + POSTGRES_EPOCH_JDATE, &year, &month, &mday);\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jan 2013 13:02:29 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition insert trigger using C language"
},
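Expanded into a hedged trigger-body fragment, the idea looks roughly like this: fetch the partition-key column in binary form with SPI_getbinval (the accessor referred to as SPI_getbinvalue in the mails) and derive year/month without touching DateStyle. The attribute number, the parent-table name and the child-name format are assumptions for illustration.

/* headers involved: utils/date.h, utils/datetime.h, utils/timestamp.h */
bool     isnull;
Datum    d;
DateADT  date;
int      year, month, mday;
char     child_name[NAMEDATALEN];

d = SPI_getbinval(trigdata->tg_trigtuple,
                  trigdata->tg_relation->rd_att,
                  date_attnum,          /* assumed: resolved once via SPI_fnumber() */
                  &isnull);
if (isnull)
    elog(ERROR, "partitioning column must not be null");

date = DatumGetDateADT(d);
if (DATE_NOT_FINITE(date))
    elog(ERROR, "date must be finite");
j2date(date + POSTGRES_EPOCH_JDATE, &year, &month, &mday);

/* e.g. "parent_tbl_2012_01"; used to pick the saved plan / target child table */
snprintf(child_name, sizeof(child_name), "parent_tbl_%04d_%02d", year, month);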
{
"msg_contents": "On Fri, Jan 11, 2013 at 9:02 AM, Heikki Linnakangas <[email protected]\n> wrote:\n\n> On 11.01.2013 12:36, Matheus de Oliveira wrote:\n>\n>> On Fri, Jan 11, 2013 at 8:19 AM, Heikki Linnakangas<hlinnakangas@**\n>> vmware.com <[email protected]>\n>>\n>>> wrote:\n>>>\n>>\n>> One thing that caught my eye:\n>>>\n>>> CREATE OR REPLACE FUNCTION partition_insert_trigger_spi()\n>>>\n>>>> RETURNS trigger\n>>>> LANGUAGE C\n>>>> VOLATILE STRICT\n>>>> AS 'partition_insert_trigger_spi'****,'partition_insert_trigger_***\n>>>> *spi'\n>>>> SET DateStyle TO 'ISO';\n>>>>\n>>>\n>>> Calling a function with SET options has a fair amount of overhead, to\n>>> set/restore the GUC on every invocation. That should be avoided in a\n>>> performance critical function like this.\n>>>\n>>\n>> I (stupidly) used SPI_getvalue [1] and expected it to always return as\n>> YYYY-MM-DD, but them I remembered it would do that only with\n>> DateStyle=ISO.\n>>\n>> But the truth is that I couldn't see any overhead, because the function\n>> was\n>> without that on my first tests, and after that I saw no difference on the\n>> tests.\n>>\n>\n> Oh, ok then. I would've expected it to make a measurable difference.\n>\n>\n> I think I should use SPI_getbinvalue instead, but I don't know how\n>> to parse the result to get year and month, any help on that?\n>>\n>\n> The fastest way is probably to use j2date like date_out does:\n>\n> DateADT date = DatumGetDateADT(x)\n> int year, month, mday;\n>\n> if (DATE_NOT_FINITE(date))\n> elog(ERROR, \"date must be finite\");\n> j2date(date + POSTGRES_EPOCH_JDATE, &year, &month, &mday);\n>\n> - Heikki\n>\n\nNice. With the modifications you suggested I did saw a good improvement on\nthe function using SPI (and a little one with heap_insert). So I was wrong\nto think that change the GUC would not make to much difference, the SPI\ncode now runs almost as fast as the heap_insert:\n\nheap_insert: 31896.098 ms\nSPI: 36558.564\n\nOf course I still could make some improvements on it, like using a LRU to\nkeep the plans, or something like that.\n\nThe new code is at github.\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Fri, Jan 11, 2013 at 9:02 AM, Heikki Linnakangas <[email protected]> wrote:\nOn 11.01.2013 12:36, Matheus de Oliveira wrote:\n\nOn Fri, Jan 11, 2013 at 8:19 AM, Heikki Linnakangas<[email protected]\n\nwrote:\n\n\n\nOne thing that caught my eye:\n\n CREATE OR REPLACE FUNCTION partition_insert_trigger_spi()\n\nRETURNS trigger\nLANGUAGE C\nVOLATILE STRICT\nAS 'partition_insert_trigger_spi'**,'partition_insert_trigger_**spi'\nSET DateStyle TO 'ISO';\n\n\nCalling a function with SET options has a fair amount of overhead, to\nset/restore the GUC on every invocation. That should be avoided in a\nperformance critical function like this.\n\n\nI (stupidly) used SPI_getvalue [1] and expected it to always return as\nYYYY-MM-DD, but them I remembered it would do that only with DateStyle=ISO.\n\nBut the truth is that I couldn't see any overhead, because the function was\nwithout that on my first tests, and after that I saw no difference on the\ntests.\n\n\nOh, ok then. 
I would've expected it to make a measurable difference.\n\n\nI think I should use SPI_getbinvalue instead, but I don't know how\nto parse the result to get year and month, any help on that?\n\n\nThe fastest way is probably to use j2date like date_out does:\n\n DateADT date = DatumGetDateADT(x)\n int year, month, mday;\n\n if (DATE_NOT_FINITE(date))\n elog(ERROR, \"date must be finite\");\n j2date(date + POSTGRES_EPOCH_JDATE, &year, &month, &mday);\n\n- Heikki\nNice. With the modifications you suggested I did saw a good improvement on the function using SPI (and a little one with heap_insert). So I was wrong to think that change the GUC would not make to much difference, the SPI code now runs almost as fast as the heap_insert:\nheap_insert: 31896.098 msSPI: 36558.564Of course I still could make some improvements on it, like using a LRU to keep the plans, or something like that.The new code is at github.Regards,\n\n-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Fri, 11 Jan 2013 09:55:07 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition insert trigger using C language"
},
{
"msg_contents": "Hi Matheus,\nI try to use your partition_insert_trigger_spi.c code for testing SPI\npartitionning.\nBut at execution time the *trigdata->tg_trigger->tgargs* pointer is null.\nDo you know why ?\nThanks a lot\nBest Reagrds\nAli Pouya\n\n\n2013/1/11 Matheus de Oliveira <[email protected]>\n\n>\n> On Fri, Jan 11, 2013 at 8:19 AM, Heikki Linnakangas <\n> [email protected]> wrote:\n>\n>> On 10.01.2013 21:48, Matheus de Oliveira wrote:\n>>\n>>> I have made a small modification to keep the plans, and it got from\n>>> 33957.768ms to 43782.376ms.\n>>>\n>>\n>> If I'm reading results.txt correctly, the avg runtimes are:\n>>\n>> C and SPI_execute_with_args: 58567.708 ms\n>> C and SPI_(prepare/keepplan/execute_**plan): 43782.376 ms\n>> C and heap_insert: 33957.768 ms\n>>\n>> So switching to prepared plans helped quite a lot, but it's still slower\n>> than direct heap_inserts.\n>>\n>>\n> Humm... You are right, I misread what it before, sorry. The 33957.768ms\n> was with heap_insert.\n>\n>\n>\n>> One thing that caught my eye:\n>>\n>> CREATE OR REPLACE FUNCTION partition_insert_trigger_spi()\n>>> RETURNS trigger\n>>> LANGUAGE C\n>>> VOLATILE STRICT\n>>> AS 'partition_insert_trigger_spi'**,'partition_insert_trigger_**spi'\n>>> SET DateStyle TO 'ISO';\n>>>\n>>\n>> Calling a function with SET options has a fair amount of overhead, to\n>> set/restore the GUC on every invocation. That should be avoided in a\n>> performance critical function like this.\n>>\n>>\n> I (stupidly) used SPI_getvalue [1] and expected it to always return as\n> YYYY-MM-DD, but them I remembered it would do that only with DateStyle=ISO.\n>\n> But the truth is that I couldn't see any overhead, because the function\n> was without that on my first tests, and after that I saw no difference on\n> the tests. I think I should use SPI_getbinvalue instead, but I don't know\n> how to parse the result to get year and month, any help on that?\n>\n> [1]\n> https://github.com/matheusoliveira/pg_partitioning_tests/blob/master/src/spi/partition_insert_trigger_spi.c#L103\n>\n> Regards,\n>\n> --\n> Matheus de Oliveira\n> Analista de Banco de Dados\n> Dextra Sistemas - MPS.Br nível F!\n> www.dextra.com.br/postgres\n>\n>\n\nHi Matheus,I try to use your partition_insert_trigger_spi.c code for testing SPI partitionning.But at execution time the trigdata->tg_trigger->tgargs pointer is null.Do you know why ?Thanks a lot\nBest ReagrdsAli Pouya2013/1/11 Matheus de Oliveira <[email protected]>\nOn Fri, Jan 11, 2013 at 8:19 AM, Heikki Linnakangas <[email protected]> wrote:\n\nOn 10.01.2013 21:48, Matheus de Oliveira wrote:\n\nI have made a small modification to keep the plans, and it got from\n33957.768ms to 43782.376ms.\n\n\nIf I'm reading results.txt correctly, the avg runtimes are:\n\nC and SPI_execute_with_args: 58567.708 ms\nC and SPI_(prepare/keepplan/execute_plan): 43782.376 ms\nC and heap_insert: 33957.768 ms\n\nSo switching to prepared plans helped quite a lot, but it's still slower than direct heap_inserts.\nHumm... You are right, I misread what it before, sorry. The 33957.768ms was with heap_insert. \n\n\n\nOne thing that caught my eye:\n\n\nCREATE OR REPLACE FUNCTION partition_insert_trigger_spi()\nRETURNS trigger\nLANGUAGE C\nVOLATILE STRICT\nAS 'partition_insert_trigger_spi','partition_insert_trigger_spi'\nSET DateStyle TO 'ISO';\n\n\nCalling a function with SET options has a fair amount of overhead, to set/restore the GUC on every invocation. 
That should be avoided in a performance critical function like this.\nI (stupidly) used SPI_getvalue [1] and expected it to always return as YYYY-MM-DD, but them I remembered it would do that only with DateStyle=ISO.But the truth is that I couldn't see any overhead, because the function was without that on my first tests, and after that I saw no difference on the tests. I think I should use SPI_getbinvalue instead, but I don't know how to parse the result to get year and month, any help on that?\n[1] https://github.com/matheusoliveira/pg_partitioning_tests/blob/master/src/spi/partition_insert_trigger_spi.c#L103\nRegards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Mon, 11 Feb 2013 16:24:58 +0100",
"msg_from": "Ali Pouya <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition insert trigger using C language"
},
{
"msg_contents": "On Mon, Feb 11, 2013 at 1:24 PM, Ali Pouya <[email protected]> wrote:\n\n> Hi Matheus,\n> I try to use your partition_insert_trigger_spi.c code for testing SPI\n> partitionning.\n> But at execution time the *trigdata->tg_trigger->tgargs* pointer is null.\n> Do you know why ?\n> Thanks a lot\n> Best Reagrds\n> Ali Pouya\n>\n>\n>\nHi Ali,\n\nThat is probably because you did not passed a parameter when defined the\ntrigger. You can follow the model at [1]. When creating the trigger, you\nhave to use a string parameter with the name of the field with the date\nvalue used for the partition.\n\nLet me know if you find any other problem.\n\n[1]\nhttps://github.com/matheusoliveira/pg_partitioning_tests/blob/master/test/spi/schema.sql\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Mon, Feb 11, 2013 at 1:24 PM, Ali Pouya <[email protected]> wrote:\n\nHi Matheus,I try to use your partition_insert_trigger_spi.c code for testing SPI partitionning.But at execution time the trigdata->tg_trigger->tgargs pointer is null.Do you know why ?Thanks a lot\n\n\nBest ReagrdsAli PouyaHi Ali,That is probably because you did not passed a parameter when defined the trigger. You can follow the model at [1]. When creating the trigger, you have to use a string parameter with the name of the field with the date value used for the partition.\nLet me know if you find any other problem.[1] https://github.com/matheusoliveira/pg_partitioning_tests/blob/master/test/spi/schema.sql\nRegards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Wed, 13 Feb 2013 09:00:08 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partition insert trigger using C language"
},
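To make the fix concrete for anyone else hitting a NULL tgargs: the column name has to be passed as a string argument in CREATE TRIGGER, and the C code reads it back from tg_trigger->tgargs. Table, trigger and column names below are made up for illustration.

CREATE TRIGGER partition_insert_before
    BEFORE INSERT ON parent_tbl
    FOR EACH ROW
    EXECUTE PROCEDURE partition_insert_trigger_spi('event_date');

And on the C side:

Trigger *trigger = trigdata->tg_trigger;
int      date_attnum;

if (trigger->tgnargs != 1 || trigger->tgargs == NULL)
    elog(ERROR, "expected one trigger argument: the partition-key column name");

/* tgargs[0] holds the string given in CREATE TRIGGER */
date_attnum = SPI_fnumber(trigdata->tg_relation->rd_att, trigger->tgargs[0]);
if (date_attnum == SPI_ERROR_NOATTRIBUTE)
    elog(ERROR, "column \"%s\" does not exist", trigger->tgargs[0]);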
{
"msg_contents": "2013/2/13 Matheus de Oliveira <[email protected]>\n\n>\n> Hi Ali,\n>\n> That is probably because you did not passed a parameter when defined the\n> trigger. You can follow the model at [1]. When creating the trigger, you\n> have to use a string parameter with the name of the field with the date\n> value used for the partition.\n>\n> Let me know if you find any other problem.\n>\n> [1]\n> https://github.com/matheusoliveira/pg_partitioning_tests/blob/master/test/spi/schema.sql\n>\n> Regards,\n> --\n> Matheus de Oliveira\n> Analista de Banco de Dados\n> Dextra Sistemas - MPS.Br nível F!\n> www.dextra.com.br/postgres\n>\n\n Hi Matheus,\nYes. You are right. Now it's OK.\nI thought that the trigger function cannont vehicle any arguments. I had\nmis-interpreted this phrase of the\nDocumentation<http://www.postgresql.org/docs/9.2/static/sql-createtrigger.html>:\n*function_name** *\n\n*A user-supplied function that is declared as taking no arguments and\nreturning type trigger, which is executed when the trigger fires.*\n\nThanks and best regards\nAli\n\n2013/2/13 Matheus de Oliveira <[email protected]>\nHi Ali,That is probably because you did not passed a parameter when defined the trigger. You can follow the model at [1]. When creating the trigger, you have to use a string parameter with the name of the field with the date value used for the partition.\nLet me know if you find any other problem.[1] https://github.com/matheusoliveira/pg_partitioning_tests/blob/master/test/spi/schema.sql\n\nRegards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres\n Hi Matheus,Yes. You are right. Now it's OK.I thought that the trigger function cannont vehicle any arguments. I had mis-interpreted this phrase of the Documentation :\nfunction_name\nA user-supplied function that is declared as taking no\n arguments and returning type trigger, which is executed when the trigger\n fires.\nThanks and best regardsAli",
"msg_date": "Fri, 15 Feb 2013 09:47:20 +0100",
"msg_from": "Ali Pouya <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partition insert trigger using C language"
}
] |
[
{
"msg_contents": "Hi!\n\nI see a massive performance drop when writing a large transaction. I'm writing data for 33 tables with COPY FROM directly from streams in Scala/Java. Over all tables there are 2.2M records which are unevenly distributed from 1 record to 315k records in some tables.\n\nFor comparison I ran a test where I used UNLOGGED tables, no PK/FK constraints, nor other constraints or indexes and no triggers for all tables. The insert rate for this scenario is well above 105k records/second over all tables (which I think is really cool!)\n\nTurning everything on (but still with UNLOGGED tables), i.e. PK/FK, additional indexes, some column check constraints and a trigger for each table which basically insert one additional record to another table, the rates dropped expectedly to around 6k to 7k records/second.\n\nExcept - and that's the wall I'm hitting - for one table which yielded just 75 records/second.\nThe main 'problem' seem to be the FK constraints. Dropping just them restored insert performance for this table to 6k records/s. The table in question has a composite PK (3 columns), 3 foreign keys and a bunch of indexes (see table obj_item_loc at the end of the mail). Compared to the other 32 tables nothing unusual.\nI'd gladly supply more information if necessary.\n\nDropping and recreating constraints/indexes is (for now) no viable alternative, since I have to write such transaction into an already populated database.\nWhat I'm trying to understand is, which limit it is I'm hitting here. I need some advice how to 'profile' this situation.\n\nConfiguration is more or less standard, except WAL settings (which should not be relevant here).\nEnterpriseDB One Click installer.\n\nAny hint is really appreciated. \nThanks!\n\n--\nHorst Dehmer\n\n\n\n\"PostgreSQL 9.2.1 on x86_64-apple-darwin, compiled by i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. 
build 5658) (LLVM build 2336.9.00), 64-bit\"\nOS X 10.8.2\nMid-2012 MacBook Pro 16 GB, 512 GB SSD\n\n\"bytea_output\";\"escape\"\n\"checkpoint_completion_target\";\"0.9\"\n\"checkpoint_segments\";\"32\"\n\"client_encoding\";\"UNICODE\"\n\"client_min_messages\";\"notice\"\n\"lc_collate\";\"en_US.UTF-8\"\n\"lc_ctype\";\"en_US.UTF-8\"\n\"listen_addresses\";\"*\"\n\"log_checkpoints\";\"on\"\n\"log_destination\";\"stderr\"\n\"log_line_prefix\";\"%t \"\n\"logging_collector\";\"on\"\n\"max_connections\";\"100\"\n\"max_stack_depth\";\"2MB\"\n\"port\";\"5432\"\n\"server_encoding\";\"UTF8\"\n\"shared_buffers\";\"24MB\"\n\"TimeZone\";\"Europe/Vienna\"\n\"wal_buffers\";\"768kB\"\n\n/etc/sysctl.conf\nkern.sysv.shmmax=1610612736\nkern.sysv.shmall=393216\nkern.sysv.shmmin=1\nkern.sysv.shmmni=256\nkern.sysv.shmseg=64\nkern.maxprocperuid=512\nkern.maxproc=2048\n\nCREATE TABLE obj_item_loc\n(\n obj_item_id numeric(20,0) NOT NULL,\n loc_id numeric(20,0) NOT NULL,\n obj_item_loc_ix numeric(20,0) NOT NULL, \n ver_acc_dim numeric(12,3), \n horz_acc_dim numeric(12,3), \n brng_angle numeric(7,4), \n brng_acc_angle numeric(7,4), \n brng_precision_code character varying(6), \n incl_angle numeric(7,4), \n incl_acc_angle numeric(7,4), \n incl_precision_code character varying(6), \n speed_rate numeric(8,4), \n speed_acc_rate numeric(8,4), \n speed_precision_code character varying(6), \n meaning_code character varying(6), \n rel_speed_code character varying(6), \n rptd_id numeric(20,0) NOT NULL,\n creator_id numeric(20,0) NOT NULL, \n update_seqnr numeric(15,0) NOT NULL, \n rec_id bigint DEFAULT nextval('rec_seq'::regclass),\n CONSTRAINT obj_item_loc_pkey PRIMARY KEY (obj_item_id, loc_id, obj_item_loc_ix),\n CONSTRAINT obj_item_loc_4fbc75641175ef1757ca310dd34e34ee_fkey FOREIGN KEY (obj_item_id)\n REFERENCES obj_item (obj_item_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT obj_item_loc_7895d64f5557b1e382c36d41212a3696_fkey FOREIGN KEY (rptd_id)\n REFERENCES rptd (rptd_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT obj_item_loc_8d919243f69bcc599873caca07ac9888_fkey FOREIGN KEY (loc_id)\n REFERENCES loc (loc_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT obj_item_loc_brng_acc_angle_ra_check CHECK (br_check_305(brng_acc_angle >= 0::numeric AND brng_acc_angle <= 359.9999, 'obj_item_loc'::text, 'brng_acc_angle'::text, brng_acc_angle::text)),\n CONSTRAINT obj_item_loc_brng_angle_ra_check CHECK (br_check_305(brng_angle >= 0::numeric AND brng_angle <= 359.9999, 'obj_item_loc'::text, 'brng_angle'::text, brng_angle::text)),\n CONSTRAINT obj_item_loc_brng_precision_code_check CHECK (br_check_305(brng_precision_code::text = ANY (ARRAY['1000MN'::text, '100MN'::text, '100SEC'::text, '10DEG'::text, '10MN'::text, '10SEC'::text, 'DEGREE'::text, 'MIL'::text, 'MINUTE'::text, 'SECOND'::text]), 'obj_item_loc'::text, 'brng_precision_code'::text, brng_precision_code::text)),\n CONSTRAINT obj_item_loc_incl_acc_angle_ra_check CHECK (br_check_305(incl_acc_angle >= 0::numeric AND incl_acc_angle <= 359.9999, 'obj_item_loc'::text, 'incl_acc_angle'::text, incl_acc_angle::text)),\n CONSTRAINT obj_item_loc_incl_angle_ra_check CHECK (br_check_305(incl_angle >= 0::numeric AND incl_angle <= 359.9999, 'obj_item_loc'::text, 'incl_angle'::text, incl_angle::text)),\n CONSTRAINT obj_item_loc_incl_precision_code_check CHECK (br_check_305(incl_precision_code::text = ANY (ARRAY['1000MN'::text, '100MN'::text, '100SEC'::text, '10DEG'::text, '10MN'::text, '10SEC'::text, 'DEGREE'::text, 
'MIL'::text, 'MINUTE'::text, 'SECOND'::text]), 'obj_item_loc'::text, 'incl_precision_code'::text, incl_precision_code::text)),\n CONSTRAINT obj_item_loc_meaning_code_check CHECK (br_check_305(meaning_code::text = ANY (ARRAY['CEOFMA'::text, 'SHAPE'::text, 'LNBRNG'::text, 'ASSCP'::text, 'COM'::text, 'CMDDET'::text, 'SOUND'::text, 'DSPCTR'::text, 'FRMCTR'::text, 'POSOIM'::text, 'CTRMNB'::text, 'ORGPRL'::text, 'STDPOS'::text, 'ADRPRP'::text]), 'obj_item_loc'::text, 'meaning_code'::text, meaning_code::text)),\n CONSTRAINT obj_item_loc_rel_speed_code_check CHECK (br_check_305(rel_speed_code::text = ANY (ARRAY['FAST'::text, 'MEDIUM'::text, 'ZERO'::text, 'SLOW'::text]), 'obj_item_loc'::text, 'rel_speed_code'::text, rel_speed_code::text)),\n CONSTRAINT obj_item_loc_speed_precision_code_check CHECK (br_check_305(speed_precision_code::text = ANY (ARRAY['KPH'::text, 'KNOTS'::text, 'MPS'::text]), 'obj_item_loc'::text, 'speed_precision_code'::text, speed_precision_code::text))\n)\nWITH (\n OIDS=FALSE,\n autovacuum_enabled=true\n);\nALTER TABLE obj_item_loc\n OWNER TO postgres;\n\nCREATE INDEX obj_item_loc_ix_rec_id\n ON obj_item_loc\n USING btree\n (rec_id);\n\nCREATE INDEX obj_item_loc_loc_id_idx\n ON obj_item_loc\n USING btree\n (loc_id);\n\nCREATE INDEX obj_item_loc_obj_item_id_idx\n ON obj_item_loc\n USING btree\n (obj_item_id);\n\nCREATE INDEX obj_item_loc_rptd_id_idx\n ON obj_item_loc\n USING btree\n (rptd_id);\n\nCREATE TRIGGER trg_01_obj_item_loc_before_insert\n BEFORE INSERT\n ON obj_item_loc\n FOR EACH ROW\n EXECUTE PROCEDURE obj_item_loc_before_insert();\n\n\nHi!I see a massive performance drop when writing a large transaction. I'm writing data for 33 tables with COPY FROM directly from streams in Scala/Java. Over all tables there are 2.2M records which are unevenly distributed from 1 record to 315k records in some tables.For comparison I ran a test where I used UNLOGGED tables, no PK/FK constraints, nor other constraints or indexes and no triggers for all tables. The insert rate for this scenario is well above 105k records/second over all tables (which I think is really cool!)Turning everything on (but still with UNLOGGED tables), i.e. PK/FK, additional indexes, some column check constraints and a trigger for each table which basically insert one additional record to another table, the rates dropped expectedly to around 6k to 7k records/second.Except - and that's the wall I'm hitting - for one table which yielded just 75 records/second.The main 'problem' seem to be the FK constraints. Dropping just them restored insert performance for this table to 6k records/s. The table in question has a composite PK (3 columns), 3 foreign keys and a bunch of indexes (see table obj_item_loc at the end of the mail). Compared to the other 32 tables nothing unusual.I'd gladly supply more information if necessary.Dropping and recreating constraints/indexes is (for now) no viable alternative, since I have to write such transaction into an already populated database.What I'm trying to understand is, which limit it is I'm hitting here. I need some advice how to 'profile' this situation.Configuration is more or less standard, except WAL settings (which should not be relevant here).EnterpriseDB One Click installer.Any hint is really appreciated. Thanks!--Horst Dehmer\"PostgreSQL 9.2.1 on x86_64-apple-darwin, compiled by i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. 
build 5658) (LLVM build 2336.9.00), 64-bit\"OS X 10.8.2Mid-2012 MacBook Pro 16 GB, 512 GB SSD\"bytea_output\";\"escape\"\"checkpoint_completion_target\";\"0.9\"\"checkpoint_segments\";\"32\"\"client_encoding\";\"UNICODE\"\"client_min_messages\";\"notice\"\"lc_collate\";\"en_US.UTF-8\"\"lc_ctype\";\"en_US.UTF-8\"\"listen_addresses\";\"*\"\"log_checkpoints\";\"on\"\"log_destination\";\"stderr\"\"log_line_prefix\";\"%t \"\"logging_collector\";\"on\"\"max_connections\";\"100\"\"max_stack_depth\";\"2MB\"\"port\";\"5432\"\"server_encoding\";\"UTF8\"\"shared_buffers\";\"24MB\"\"TimeZone\";\"Europe/Vienna\"\"wal_buffers\";\"768kB\"/etc/sysctl.confkern.sysv.shmmax=1610612736kern.sysv.shmall=393216kern.sysv.shmmin=1kern.sysv.shmmni=256kern.sysv.shmseg=64kern.maxprocperuid=512kern.maxproc=2048CREATE TABLE obj_item_loc( obj_item_id numeric(20,0) NOT NULL, loc_id numeric(20,0) NOT NULL, obj_item_loc_ix numeric(20,0) NOT NULL, ver_acc_dim numeric(12,3), horz_acc_dim numeric(12,3), brng_angle numeric(7,4), brng_acc_angle numeric(7,4), brng_precision_code character varying(6), incl_angle numeric(7,4), incl_acc_angle numeric(7,4), incl_precision_code character varying(6), speed_rate numeric(8,4), speed_acc_rate numeric(8,4), speed_precision_code character varying(6), meaning_code character varying(6), rel_speed_code character varying(6), rptd_id numeric(20,0) NOT NULL, creator_id numeric(20,0) NOT NULL, update_seqnr numeric(15,0) NOT NULL, rec_id bigint DEFAULT nextval('rec_seq'::regclass), CONSTRAINT obj_item_loc_pkey PRIMARY KEY (obj_item_id, loc_id, obj_item_loc_ix), CONSTRAINT obj_item_loc_4fbc75641175ef1757ca310dd34e34ee_fkey FOREIGN KEY (obj_item_id) REFERENCES obj_item (obj_item_id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT obj_item_loc_7895d64f5557b1e382c36d41212a3696_fkey FOREIGN KEY (rptd_id) REFERENCES rptd (rptd_id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT obj_item_loc_8d919243f69bcc599873caca07ac9888_fkey FOREIGN KEY (loc_id) REFERENCES loc (loc_id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT obj_item_loc_brng_acc_angle_ra_check CHECK (br_check_305(brng_acc_angle >= 0::numeric AND brng_acc_angle <= 359.9999, 'obj_item_loc'::text, 'brng_acc_angle'::text, brng_acc_angle::text)), CONSTRAINT obj_item_loc_brng_angle_ra_check CHECK (br_check_305(brng_angle >= 0::numeric AND brng_angle <= 359.9999, 'obj_item_loc'::text, 'brng_angle'::text, brng_angle::text)), CONSTRAINT obj_item_loc_brng_precision_code_check CHECK (br_check_305(brng_precision_code::text = ANY (ARRAY['1000MN'::text, '100MN'::text, '100SEC'::text, '10DEG'::text, '10MN'::text, '10SEC'::text, 'DEGREE'::text, 'MIL'::text, 'MINUTE'::text, 'SECOND'::text]), 'obj_item_loc'::text, 'brng_precision_code'::text, brng_precision_code::text)), CONSTRAINT obj_item_loc_incl_acc_angle_ra_check CHECK (br_check_305(incl_acc_angle >= 0::numeric AND incl_acc_angle <= 359.9999, 'obj_item_loc'::text, 'incl_acc_angle'::text, incl_acc_angle::text)), CONSTRAINT obj_item_loc_incl_angle_ra_check CHECK (br_check_305(incl_angle >= 0::numeric AND incl_angle <= 359.9999, 'obj_item_loc'::text, 'incl_angle'::text, incl_angle::text)), CONSTRAINT obj_item_loc_incl_precision_code_check CHECK (br_check_305(incl_precision_code::text = ANY (ARRAY['1000MN'::text, '100MN'::text, '100SEC'::text, '10DEG'::text, '10MN'::text, '10SEC'::text, 'DEGREE'::text, 'MIL'::text, 'MINUTE'::text, 'SECOND'::text]), 'obj_item_loc'::text, 'incl_precision_code'::text, incl_precision_code::text)), CONSTRAINT 
obj_item_loc_meaning_code_check CHECK (br_check_305(meaning_code::text = ANY (ARRAY['CEOFMA'::text, 'SHAPE'::text, 'LNBRNG'::text, 'ASSCP'::text, 'COM'::text, 'CMDDET'::text, 'SOUND'::text, 'DSPCTR'::text, 'FRMCTR'::text, 'POSOIM'::text, 'CTRMNB'::text, 'ORGPRL'::text, 'STDPOS'::text, 'ADRPRP'::text]), 'obj_item_loc'::text, 'meaning_code'::text, meaning_code::text)), CONSTRAINT obj_item_loc_rel_speed_code_check CHECK (br_check_305(rel_speed_code::text = ANY (ARRAY['FAST'::text, 'MEDIUM'::text, 'ZERO'::text, 'SLOW'::text]), 'obj_item_loc'::text, 'rel_speed_code'::text, rel_speed_code::text)), CONSTRAINT obj_item_loc_speed_precision_code_check CHECK (br_check_305(speed_precision_code::text = ANY (ARRAY['KPH'::text, 'KNOTS'::text, 'MPS'::text]), 'obj_item_loc'::text, 'speed_precision_code'::text, speed_precision_code::text)))WITH ( OIDS=FALSE, autovacuum_enabled=true);ALTER TABLE obj_item_loc OWNER TO postgres;CREATE INDEX obj_item_loc_ix_rec_id ON obj_item_loc USING btree (rec_id);CREATE INDEX obj_item_loc_loc_id_idx ON obj_item_loc USING btree (loc_id);CREATE INDEX obj_item_loc_obj_item_id_idx ON obj_item_loc USING btree (obj_item_id);CREATE INDEX obj_item_loc_rptd_id_idx ON obj_item_loc USING btree (rptd_id);CREATE TRIGGER trg_01_obj_item_loc_before_insert BEFORE INSERT ON obj_item_loc FOR EACH ROW EXECUTE PROCEDURE obj_item_loc_before_insert();",
"msg_date": "Sat, 12 Jan 2013 00:55:36 +0100",
"msg_from": "Horst Dehmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Insert performance for large transaction with multiple COPY FROM"
},
{
"msg_contents": "On Friday, January 11, 2013, Horst Dehmer wrote:\n\n>\n> Except - and that's the wall I'm hitting - for one table which yielded\n> just 75 records/second.\n> The main 'problem' seem to be the FK constraints. Dropping just them\n> restored insert performance for this table to 6k records/s.\n>\n\nIt sure sounds like you don't have enough RAM to hold the foreign-key table\ndata needed to check the constraints, so every insert needs one disk\nrevolution to fetch the data.\n\nIf you drop the indexes and constraints one at a time until it speeds up,\nis there a certain one that is the culprit?\n\nYou can look in pg_statio_user_tables to see what tables and indexes have\nhigh io being driven by the bulk loading.\n\nUse \"top\" to see of the server is mostly IO bound or CPU bound.\n\nCheers,\n\nJeff\n\nOn Friday, January 11, 2013, Horst Dehmer wrote:\nExcept - and that's the wall I'm hitting - for one table which yielded just 75 records/second.The main 'problem' seem to be the FK constraints. Dropping just them restored insert performance for this table to 6k records/s. \nIt sure sounds like you don't have enough RAM to hold the foreign-key table data needed to check the constraints, so every insert needs one disk revolution to fetch the data.\nIf you drop the indexes and constraints one at a time until it speeds up, is there a certain one that is the culprit? You can look in pg_statio_user_tables to see what tables and indexes have high io being driven by the bulk loading.\nUse \"top\" to see of the server is mostly IO bound or CPU bound.Cheers,Jeff",
"msg_date": "Fri, 11 Jan 2013 17:17:40 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Insert performance for large transaction with multiple COPY FROM"
},
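A possible way to act on the pg_statio_user_tables suggestion; resetting the counters first makes the numbers attributable to the bulk load. The columns are from the standard statistics views, and the LIMIT is arbitrary.

SELECT pg_stat_reset();   -- optional: zero the counters before the load

-- after (or during) the load: which tables and their indexes hit the disk most?
SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
  FROM pg_statio_user_tables
 ORDER BY heap_blks_read + idx_blks_read DESC
 LIMIT 10;

pg_statio_user_indexes gives the same read/hit breakdown per individual index if one index turns out to dominate.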
{
"msg_contents": "On Fri, Jan 11, 2013 at 8:55 PM, Horst Dehmer <[email protected]> wrote:\n> Except - and that's the wall I'm hitting - for one table which yielded just\n> 75 records/second.\n> The main 'problem' seem to be the FK constraints. Dropping just them\n> restored insert performance for this table to 6k records/s. The table in\n> question has a composite PK (3 columns), 3 foreign keys and a bunch of\n> indexes (see table obj_item_loc at the end of the mail). Compared to the\n> other 32 tables nothing unusual.\n> I'd gladly supply more information if necessary.\n...\n> CREATE TABLE obj_item_loc\n> (\n> obj_item_id numeric(20,0) NOT NULL,\n> loc_id numeric(20,0) NOT NULL,\n> obj_item_loc_ix numeric(20,0) NOT NULL,\n\nThat sounds a lot like a missing index on the target relations (or\nindices that are unusable).\n\nThose numeric ids look really unusual. Why not bigint? It's close to\nthe same precision, but native, faster, more compact, and quite\nunambiguous when indices are involved. If the types don't match on\nboth tables, it's quite likely indices won't be used when checking the\nFK, and that spells trouble.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 11 Jan 2013 22:17:59 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance for large transaction with multiple COPY FROM"
},
{
"msg_contents": "Yes, the ids is something I don't like either.\nThey carry additional semantics, which I cannot make go away.\nHow are chances char(20) is more time efficient than numeric(20)?\nDisk space is no problem here.\n\n\nOn 12.01.2013, at 02:17, Claudio Freire <[email protected]> wrote:\n\n> On Fri, Jan 11, 2013 at 8:55 PM, Horst Dehmer <[email protected]> wrote:\n>> Except - and that's the wall I'm hitting - for one table which yielded just\n>> 75 records/second.\n>> The main 'problem' seem to be the FK constraints. Dropping just them\n>> restored insert performance for this table to 6k records/s. The table in\n>> question has a composite PK (3 columns), 3 foreign keys and a bunch of\n>> indexes (see table obj_item_loc at the end of the mail). Compared to the\n>> other 32 tables nothing unusual.\n>> I'd gladly supply more information if necessary.\n> ...\n>> CREATE TABLE obj_item_loc\n>> (\n>> obj_item_id numeric(20,0) NOT NULL,\n>> loc_id numeric(20,0) NOT NULL,\n>> obj_item_loc_ix numeric(20,0) NOT NULL,\n> \n> That sounds a lot like a missing index on the target relations (or\n> indices that are unusable).\n> \n> Those numeric ids look really unusual. Why not bigint? It's close to\n> the same precision, but native, faster, more compact, and quite\n> unambiguous when indices are involved. If the types don't match on\n> both tables, it's quite likely indices won't be used when checking the\n> FK, and that spells trouble.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 12 Jan 2013 21:16:02 +0100",
"msg_from": "Horst Dehmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert performance for large transaction with multiple COPY FROM"
},
{
"msg_contents": "On Fri, Jan 11, 2013 at 5:17 PM, Claudio Freire <[email protected]> wrote:\n> On Fri, Jan 11, 2013 at 8:55 PM, Horst Dehmer <[email protected]> wrote:\n>> Except - and that's the wall I'm hitting - for one table which yielded just\n>> 75 records/second.\n>> The main 'problem' seem to be the FK constraints. Dropping just them\n>> restored insert performance for this table to 6k records/s. The table in\n>> question has a composite PK (3 columns), 3 foreign keys and a bunch of\n>> indexes (see table obj_item_loc at the end of the mail). Compared to the\n>> other 32 tables nothing unusual.\n>> I'd gladly supply more information if necessary.\n> ...\n>> CREATE TABLE obj_item_loc\n>> (\n>> obj_item_id numeric(20,0) NOT NULL,\n>> loc_id numeric(20,0) NOT NULL,\n>> obj_item_loc_ix numeric(20,0) NOT NULL,\n>\n> That sounds a lot like a missing index on the target relations (or\n> indices that are unusable).\n>\n> Those numeric ids look really unusual. Why not bigint? It's close to\n> the same precision, but native, faster, more compact, and quite\n> unambiguous when indices are involved. If the types don't match on\n> both tables, it's quite likely indices won't be used when checking the\n> FK, and that spells trouble.\n\nWill PG allow you to add a FK constraint where there is no usable\nindex on the referenced side?\n\nI have failed to do so, but perhaps I am not being devious enough.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 12 Jan 2013 13:23:33 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance for large transaction with multiple COPY FROM"
},
{
"msg_contents": "On Sat, Jan 12, 2013 at 5:16 PM, Horst Dehmer <[email protected]> wrote:\n> Yes, the ids is something I don't like either.\n> They carry additional semantics, which I cannot make go away.\n> How are chances char(20) is more time efficient than numeric(20)?\n> Disk space is no problem here.\n\nWhat are the other tables like then?\n\nThe exact data types involved are at issue here, so it matters.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 12 Jan 2013 19:18:05 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance for large transaction with multiple COPY FROM"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> Will PG allow you to add a FK constraint where there is no usable\n> index on the referenced side?\n\nIt will not, because the referenced side must have a unique constraint,\nie an index.\n\nThe standard performance gotcha here is not having an index on the\nreferencing side. But that only hurts when doing UPDATEs/DELETEs of\nreferenced-side keys, which as far as I gathered was not the OP's\nscenario.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 12 Jan 2013 17:26:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance for large transaction with multiple COPY FROM"
},
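To make Tom's point concrete, a small sketch using the types from the thread's schema (the FK constraint name here is invented): the foreign key is only accepted because the referenced column already has a unique index via its primary key, while the index on the referencing side is optional and mainly pays off when referenced keys are updated or deleted.

CREATE TABLE loc
(
  loc_id numeric(20,0) NOT NULL,
  CONSTRAINT loc_pkey PRIMARY KEY (loc_id)   -- unique index: required for the FK below
);

CREATE TABLE obj_item_loc
(
  loc_id numeric(20,0) NOT NULL,
  CONSTRAINT obj_item_loc_loc_fk FOREIGN KEY (loc_id) REFERENCES loc (loc_id)
);

-- optional; speeds up UPDATE/DELETE of loc rows, not plain INSERTs into obj_item_loc
CREATE INDEX obj_item_loc_loc_id_idx ON obj_item_loc (loc_id);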
{
"msg_contents": "The types referenced by the foreign keys are the same Numeric(20). \nSince the complete schema (of about 300 tables) is generated, I will just try char(20) instead of numeric(20) in the next days to see if it makes any difference. Which I somehow doubt. \n\nBut first I'm following the lead of the tables/indexes iostats given by Jeff.\n\n\nobj_item_loc references the following three tables and there should be no surprises.\n\nCREATE UNLOGGED TABLE loc\n(\n loc_id numeric(20,0) NOT NULL, \n...\n CONSTRAINT loc_pkey PRIMARY KEY (loc_id),\n…\n)\n\nCREATE UNLOGGED TABLE obj_item\n(\n obj_item_id numeric(20,0) NOT NULL, \n...\n CONSTRAINT obj_item_pkey PRIMARY KEY (obj_item_id),\n…\n)\n\nCREATE UNLOGGED TABLE rptd\n(\n rptd_id numeric(20,0) NOT NULL, \n...\n CONSTRAINT rptd_pkey PRIMARY KEY (rptd_id),\n…\n)\n\n\nOn 12.01.2013, at 23:18, Claudio Freire <[email protected]> wrote:\n\n> On Sat, Jan 12, 2013 at 5:16 PM, Horst Dehmer <[email protected]> wrote:\n>> Yes, the ids is something I don't like either.\n>> They carry additional semantics, which I cannot make go away.\n>> How are chances char(20) is more time efficient than numeric(20)?\n>> Disk space is no problem here.\n> \n> What are the other tables like then?\n> \n> The exact data types involved are at issue here, so it matters.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 12 Jan 2013 23:41:53 +0100",
"msg_from": "Horst Dehmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert performance for large transaction with multiple COPY FROM"
},
{
"msg_contents": "On Sat, Jan 12, 2013 at 7:41 PM, Horst Dehmer <[email protected]> wrote:\n> Since the complete schema (of about 300 tables) is generated, I will just try char(20) instead of numeric(20) in the next days to see if it makes any difference. Which I somehow doubt.\n\nI think that might just make it worse.\n\nWell, maybe the others were right, and it's just that you're hitting\nthe disk on that particular table.\n\nThat, or it's all those CHECK constraints. Have you tried removing the\nCHECK constraints (they're a heapload of function calls)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 12 Jan 2013 22:52:46 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance for large transaction with multiple COPY FROM"
},
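If dropping the CHECK constraints for the load is tried, a rough sketch could look like this; the constraint name, expression and file path are invented since the real ones are not shown here, and re-adding the constraint revalidates the table in one scan instead of one function call per inserted row.

ALTER TABLE obj_item_loc DROP CONSTRAINT obj_item_loc_ck1;   -- hypothetical name
COPY obj_item_loc FROM '/tmp/obj_item_loc.copy';             -- hypothetical path
ALTER TABLE obj_item_loc ADD CONSTRAINT obj_item_loc_ck1
  CHECK (obj_item_loc_ix >= 0);                              -- hypothetical expression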
{
"msg_contents": "After more testing I have gained some insights:\n\nThe foreign key constraints are NOT responsible for the low COPY FROM performance in my case. I forgot about the indexes which are created along with the FK constraints.\n\nBesides the primary key\n\nCONSTRAINT obj_item_loc_pkey PRIMARY KEY (obj_item_id, loc_id, obj_item_loc_ix),\n\nthe table OBJ_ITEM_LOC has four additional indexes (let's call them idx_1 through idx_4)\n\nCREATE INDEX idx_1 ON obj_item_loc USING btree (rec_id);\nCREATE INDEX idx_2 ON obj_item_loc USING btree (loc_id);\nCREATE INDEX idx_3 ON obj_item_loc USING btree (rptd_id);\nCREATE INDEX idx_4 ON obj_item_loc USING btree (obj_item_id);\n\nThe indexes 2 to 4 are intended to speed up joins between OBJ_ITEM_LOC and\nLOC (loc_id), RPTD (rptd_id) and OBJ_ITEM (obj_item) respectively (and I'm highly suspicious if this makes sense at all.)\n\nidx_4 together with a simple select in the tables on-insert trigger is slowing things down considerably.\nWith idx_4 and the trigger rates are\n\n 44100 rows, 0:00:04.576, 9637 r/s: LOC\n 2101 rows, 0:00:00.221, 9506 r/s: OBJ_ITEM\n 2101 rows, 0:00:00.278, 7557 r/s: ORG\n 94713 rows, 0:00:18.502, 5119 r/s: RPTD\n 44100 rows, 0:03:03.437, 240 r/s: OBJ_ITEM_LOC\nimported 187115 record in 0:03:27.081 => 903 r/s\n\npg_statio comes up with same big numbers (reads = bad, hits = not so bad?):\n\n relname | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit \n--------------+----------------+---------------+---------------+--------------\n obj_item_loc | 1262 | 9908013 | 1199 | 1682005\n rptd | 4434 | 279022 | 1806 | 1270746\n org | 38 | 191559 | 19 | 201071\n obj_item | 84 | 92476 | 29 | 104134\n loc | 768 | 88902 | 597 | 352680\n(5 rows)\n\nDropping idx_1, idx_2 and idx_3 at the same time has no significant impact. But take away idx_4 only:\n\n 44100 rows, 0:00:04.558, 9675 r/s: LOC\n 2101 rows, 0:00:00.220, 9593 r/s: OBJ_ITEM\n 2101 rows, 0:00:00.275, 7640 r/s: ORG\n 94713 rows, 0:00:18.407, 5145 r/s: RPTD\n 44100 rows, 0:00:11.433, 3857 r/s: OBJ_ITEM_LOC\nimported 187115 record in 0:00:34.938 => 5355 r/s\n\nHm, not bad. Now for the select statement in the on insert trigger:\n\n \tSELECT\t* \n\tFROM \tobj_item_loc \n\tWHERE \tobj_item_loc.obj_item_id = NEW.obj_item_id \n\tAND\tobj_item_loc.loc_id = NEW.loc_id \n\tAND \tobj_item_loc.obj_item_loc_ix = NEW.obj_item_loc_ix \n\tINTO\told;\n\nExecuting this query AFTER the bulk insert (and probably some auto-vacuuming) the query plan looks like this\n\n\texplain analyze \n\tselect \t* \n\tfrom \tobj_item_loc \n\twhere \t(obj_item_id, loc_id, obj_item_loc_ix) =\n\t\t(10903011224100014650,10903010224100089226,10900024100000140894)\n\nQUERY PLAN \n--------------\n Index Scan using obj_item_loc_loc_id_idx on obj_item_loc \n\t(cost=0.00..8.36 rows=1 width=329) \n\t(actual time=0.039..0.040 rows=1 loops=1)\n Index Cond: (loc_id = 10903010224100089226::numeric)\n Filter: ((obj_item_id = 10903011224100014650::numeric) AND \n\t(obj_item_loc_ix = 10900024100000140894::numeric))\n Total runtime: 0.079 ms\n\nAfter some head-scratching I realized that obj_item_id is just referencing a meager 2101 rows which probably makes not for a good index candidate. So, the query plan make some sense, I guess.\n\nNow I have some (more) questions:\n\n1. How do I know which index (if any) is chosen for a select statement inside a trigger during a bulk load transaction? (or for that matter: a series of recursive plpgsql functions)\n2. 
The query planner depends on stats collected by auto-vacuum/vacuum analyze, right? Does stats collecting also happen during a lengthy transaction? \n3. Is it possible (or even advisable) to trigger vacuum analyze inside an ongoing transaction. Let's say load 10,000 rows of table A, analyze table A, insert the next 10,000 rows, analyze again, ...\n\nI'm sorry if this is basic stuff I'm asking here, but especially point 2 is bothering me.\n\n--\nKind regards\nHorst Dehmer\n\nOn 12.01.2013, at 01:17, Jeff Janes <[email protected]> wrote:\n\n> On Friday, January 11, 2013, Horst Dehmer wrote:\n> \n> Except - and that's the wall I'm hitting - for one table which yielded just 75 records/second.\n> The main 'problem' seem to be the FK constraints. Dropping just them restored insert performance for this table to 6k records/s.\n> \n> It sure sounds like you don't have enough RAM to hold the foreign-key table data needed to check the constraints, so every insert needs one disk revolution to fetch the data.\n> \n> If you drop the indexes and constraints one at a time until it speeds up, is there a certain one that is the culprit? \n> \n> You can look in pg_statio_user_tables to see what tables and indexes have high io being driven by the bulk loading.\n> \n> Use \"top\" to see of the server is mostly IO bound or CPU bound.\n> \n> Cheers,\n> \n> Jeff\n",
"msg_date": "Tue, 15 Jan 2013 23:44:29 +0000",
"msg_from": "Horst Dehmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert performance for large transaction with multiple COPY FROM"
},
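Since the trigger's lookup is on exactly the primary key columns, the PK index alone can serve it. One possible arrangement (a sketch; the file path is invented) is to build the single-column index only after the bulk load, so the planner cannot be tempted by it while the table statistics are still those of an empty table:

DROP INDEX IF EXISTS idx_4;
COPY obj_item_loc FROM '/tmp/obj_item_loc.copy';
CREATE INDEX idx_4 ON obj_item_loc USING btree (obj_item_id);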
{
"msg_contents": "On Tue, Jan 15, 2013 at 3:44 PM, Horst Dehmer <[email protected]> wrote:\n\n\n> idx_4 together with a simple select in the tables on-insert trigger is\n> slowing things down considerably.\n\nSo the theory is that the presence of idx_4 is causing the trigger to\npick a poor plan (i.e. one using idx_4) while its absence removes that\ntemptation?\n\n\n> pg_statio comes up with same big numbers (reads = bad, hits = not so bad?):\n\nTrue disk reads are much more expensive, but given how few reads you\nhave relative to hits, I now think that in aggregate the hits are more\nof a concern than the reads are. In other words, you seem to be CPU\nbound, not IO bound.\n\nEven more so I think (but not with much confidence) that most of your\n\"reads\" are actually coming from the OS cache and not from the disk.\nPG cannot distinguish true disk reads from OS cache reads.\n\nWhen was the last time you reset the stats? That is, are your\nreported numbers accumulated over several loads, with some having idx4\nand some not?\n\n...\n>\n> Now I have some (more) questions:\n>\n> 1. How do I know which index (if any) is chosen for a select statement\n> inside a trigger during a bulk load transaction? (or for that matter: a\n> series of recursive plpgsql functions)\n\nInformally, reset your database to the state it was in before the\nload, analyze it, and do the explain again before you do the load.\n\nMore formally, use use auto_explain and set\nauto_explain.log_nested_statements to true. I haven't verified this\nworks with triggers, just going by the description I think it should.\n\n> 2. The query planner depends on stats collected by auto-vacuum/vacuum\n> analyze, right? Does stats collecting also happen during a lengthy\n> transaction?\n\nMy understanding is that a transaction will not dump its stats until\nthe commit, so the auto analyze will not occur *due to* the lengthy\ntransaction until after it is over. But if the table was already due\nfor analyze anyway due to previous or concurrent shorter transactions,\nthe analyze will happen. However, the lengthy transaction might not\nsee the results of the analyze (I'm not clear on the transaction\nsnapshot semantics of the statistics tables) and even if it did see\nthem, it might just be using cached plans and so would not change the\nplan in the middle.\n\n> 3. Is it possible (or even advisable) to trigger vacuum analyze inside an\n> ongoing transaction. Let's say load 10,000 rows of table A, analyze table A,\n> insert the next 10,000 rows, analyze again, ...\n\nYou can't vacuum inside a transaction. You can analyze, but I don't\nknow if it would be advisable.\n\nYour use case is a little unusual. If you are bulk loading into an\ninitially empty table, usually you would remove the trigger and add it\nafter the load (with some kind of bulk operation to make up for\nwhatever it was the trigger would have been doing). On the other\nhand, if you are bulk loading into a \"live\" table and so can't drop\nthe trigger, then the live table should have good-enough preexisting\nstatistics to make the trigger choose a good plan.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 17 Jan 2013 10:12:06 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance for large transaction with multiple COPY FROM"
},
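For the auto_explain suggestion, the session-level setup is roughly the following (standard settings of the auto_explain contrib module; log_analyze adds per-node timing overhead, so it is best limited to test runs):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;          -- log the plan of every statement
SET auto_explain.log_nested_statements = on;    -- include statements run inside triggers/functions
SET auto_explain.log_analyze = on;              -- include actual row counts and timings
-- then run the COPY in this same session and read the plans from the server log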
{
"msg_contents": "Hey Jeff (and others)!\n\nFirst of all: Thanks for your detailed explanations and guide lines.\n\n\nOn 17.01.2013, at 18:12, Jeff Janes <[email protected]> wrote:\n\n> So the theory is that the presence of idx_4 is causing the trigger to\n> pick a poor plan (i.e. one using idx_4) while its absence removes that\n> temptation?\n\nYes. And auto_explain confirms this for the first record (obj_item_loc_obj_item_id_idx = idx_4 from last my last mail):\n\n2013-01-18 22:50:21 CET LOG: duration: 0.021 ms plan:\n\tQuery Text: SELECT * FROM obj_item_loc WHERE obj_item_loc.obj_item_id = NEW.obj_item_id AND obj_item_loc.loc_id = NEW.loc_id AND obj_item_loc.obj_item_loc_ix = NEW.obj_item_loc_ix\n\tIndex Scan using obj_item_loc_obj_item_id_idx on obj_item_loc (cost=0.00..8.27 rows=1 width=382)\n\t Index Cond: (obj_item_id = $15)\n\t Filter: ((loc_id = $16) AND (obj_item_loc_ix = $17))\n2013-01-18 22:50:21 CET CONTEXT: SQL statement \"SELECT * FROM obj_item_loc WHERE obj_item_loc.obj_item_id = NEW.obj_item_id AND obj_item_loc.loc_id = NEW.loc_id AND obj_item_loc.obj_item_loc_ix = NEW.obj_item_loc_ix\"\n\tPL/pgSQL function obj_item_loc_before_insert() line 5 at SQL statement\n\tCOPY obj_item_loc, line 1: \"10903011224100007276\t10903010224100015110\t10900024100000029720\t\\N\t\\N\t\\N\t\\N\t\\N\t\\N\t\\N\t\\N\t\\N\t\\N\t\\N\t\\N\t\\...\"\n\nand for one of the last records:\n\n2013-01-18 22:53:20 CET LOG: duration: 16.088 ms plan:\n\tQuery Text: SELECT * FROM obj_item_loc WHERE obj_item_loc.obj_item_id = NEW.obj_item_id AND obj_item_loc.loc_id = NEW.loc_id AND obj_item_loc.obj_item_loc_ix = NEW.obj_item_loc_ix\n\tIndex Scan using obj_item_loc_obj_item_id_idx on obj_item_loc (cost=0.00..8.27 rows=1 width=382)\n\t Index Cond: (obj_item_id = $15)\n\t Filter: ((loc_id = $16) AND (obj_item_loc_ix = $17))\n\nI see a linear increase of the duration from 0.0x ms to over 16 ms (apart from a few nasty outliers with about 22 ms). Although even at the end there are still a few durations < 0.03 but mostly 15 ms and above.\n\n> True disk reads are much more expensive, but given how few reads you\n> have relative to hits, I now think that in aggregate the hits are more\n> of a concern than the reads are. In other words, you seem to be CPU\n> bound, not IO bound.\n\nYes, definitely CPU bound, as top shows 99+% CPU utilization.\n\n> Even more so I think (but not with much confidence) that most of your\n> \"reads\" are actually coming from the OS cache and not from the disk.\n> PG cannot distinguish true disk reads from OS cache reads.\n> \n> When was the last time you reset the stats? That is, are your\n> reported numbers accumulated over several loads, with some having idx4\n> and some not?\n\nI set up a fresh database before each test run. So the stats should be clean.\n\n> More formally, use use auto_explain and set\n> auto_explain.log_nested_statements to true. I haven't verified this\n> works with triggers, just going by the description I think it should.\n\nNice tip! Works for triggers as well.\n\n> Your use case is a little unusual. If you are bulk loading into an\n> initially empty table, usually you would remove the trigger and add it\n> after the load (with some kind of bulk operation to make up for\n> whatever it was the trigger would have been doing). 
On the other\n> hand, if you are bulk loading into a \"live\" table and so can't drop\n> the trigger, then the live table should have good-enough preexisting\n> statistics to make the trigger choose a good plan.\n\nMy case is indeed unusual as for the whole model of 276 tables there will never be an update nor a delete on any row.\nThe model is rather short-lived, from a few hours to a few months. COPY FROM/TO are the only ways to get data into the database and back out. And in between there is lots of graph traversal and calculation of convex hulls. But the lengthy transaction are by far not the common case.\n\nHaving said that, I'm no longer sure if a RDBMS is the right tool for the backend. Maybe indexing and storing with a plain full text search engine is. Dunno...\n\nThanks again!\n\n--\nHorst\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 18 Jan 2013 23:15:15 +0000",
"msg_from": "Horst Dehmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert performance for large transaction with multiple COPY FROM"
}
] |
[
{
"msg_contents": "Hi,\n\nEvery month, I search in the mailing lists and in history of wiki page\nabout table partition for any news. But I don't see any moves to implement\nthis feature (aka transparent table partitioning).\nThe last update of wiki was 2011-08-12 !\n\nThis month, Satoshi made an extension (\nhttp://pgsnaga.blogspot.com.br/2013/01/announcing-pgpart-extension-table.html)\nthat helps the creation the trigger, child table, etc,etc,etc, but I think\nthat is only a helper for a 'must have' internal feature.\n\nI'm dreaming with the day that Hubert will put in http://www.depesz.com/:\n\"Waiting for 9.3: Transparent table partitioning!\" ;)\n\nBest regards,\n\nAlexandre\n\nHi,Every month, I search in the mailing lists and in history of wiki page about table partition for any news. But I don't see any moves to implement this feature (aka transparent table partitioning).\nThe last update of wiki was 2011-08-12 !This month, Satoshi made an extension (http://pgsnaga.blogspot.com.br/2013/01/announcing-pgpart-extension-table.html) that helps the creation the trigger, child table, etc,etc,etc, but I think that is only a helper for a 'must have' internal feature.\nI'm dreaming with the day that Hubert will put in http://www.depesz.com/: \"Waiting for 9.3: Transparent table partitioning!\" ;) \nBest regards,Alexandre",
"msg_date": "Sun, 13 Jan 2013 19:49:18 -0200",
"msg_from": "Alexandre de Arruda Paes <[email protected]>",
"msg_from_op": true,
"msg_subject": "From TODO: Simplify creation of partitioned tables"
}
] |
[
{
"msg_contents": "Hello,\n\nI'm running postgresl 9.0. After partitioning a big table, CPU usage raised\nfrom average 5-10% to average 70-80%.\n\n- the table contains about 20.000.000 rows\n- partitions are selected using a trigger, based on an indexed field, a\ndate (IF date_taken >= x AND date_taken < y)\n- I created 5 partitions, the 2012 one now contains most of the rows. The\n2013 partition is the \"live\" partition, mostly insert, a few select based\non the above indexed field. The 2013, 2014, 2015 partitions are empty\n- constraint execution is on.\n\nI have 2 weeks CPU usage reports and the pattern definately changed after I\nmade the partitions. Any idea?\n\nthanks,\n\n-- \nrd\n\nThis is the way the world ends.\nNot with a bang, but a whimper.\n\nHello,I'm running postgresl 9.0. After partitioning a big table, CPU usage raised from average 5-10% to average 70-80%. - the table contains about 20.000.000 rows\n- partitions are selected using a trigger, based on an indexed field, a date (IF date_taken >= x AND date_taken < y)- I created 5 partitions, the 2012 one now contains most of the rows. The 2013 partition is the \"live\" partition, mostly insert, a few select based on the above indexed field. The 2013, 2014, 2015 partitions are empty\n- constraint execution is on. I have 2 weeks CPU usage reports and the pattern definately changed after I made the partitions. Any idea? thanks,\n-- rdThis is the way the world ends.Not with a bang, but a whimper.",
"msg_date": "Mon, 21 Jan 2013 16:05:05 +0100",
"msg_from": "rudi <[email protected]>",
"msg_from_op": true,
"msg_subject": "High CPU usage after partitioning"
},
{
"msg_contents": "I was under the impression that the default_statistics_target was a \npercentage of rows to analyze. Maybe this is not the case?\n\nI ran an analyze during a \"quiet point\" last night and for a few of my \nlarge tables, I didn't get what I consider a reasonable sampling of \nrows. When running with \"verbose\" enabled, it appeared that a maximum \nof 240000 rows were being analyzed, including on tables exceeding 4-8mm \nrows. My default_statistics_target = 80.\n\nShouldn't I be analyzing a larger percentage of these big tables?\n\nWhat is the unit-of-measure used for default_statistics_target?\n\nThanks in advance,\nAJ\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Jan 2013 10:29:34 -0500",
"msg_from": "AJ Weber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Analyze and default_statistics_target"
},
{
"msg_contents": "On 21.01.2013 17:29, AJ Weber wrote:\n> I was under the impression that the default_statistics_target was a\n> percentage of rows to analyze. Maybe this is not the case?\n\nNope.\n\n> I ran an analyze during a \"quiet point\" last night and for a few of my\n> large tables, I didn't get what I consider a reasonable sampling of\n> rows. When running with \"verbose\" enabled, it appeared that a maximum of\n> 240000 rows were being analyzed, including on tables exceeding 4-8mm\n> rows. My default_statistics_target = 80.\n>\n> Shouldn't I be analyzing a larger percentage of these big tables?\n\nAnalyze only needs a fairly small random sample of the rows in the table \nto get a picture of what the data looks like. Compare with e.g opinion \npolls; you only need to sample a few thousand people to get a result \nwith reasonable error bound.\n\nThat's for estimating the histogram. Estimating ndistinct is a different \nstory, and it's well-known that the estimates of ndistinct are sometimes \nwildly wrong.\n\n> What is the unit-of-measure used for default_statistics_target?\n\nIt's the number of entries stored in the histogram and \nmost-common-values list in pg_statistics.\n\nSee also http://www.postgresql.org/docs/devel/static/planner-stats.html:\n\n\"The amount of information stored in pg_statistic by ANALYZE, in \nparticular the maximum number of entries in the most_common_vals and \nhistogram_bounds arrays for each column, can be set on a \ncolumn-by-column basis using the ALTER TABLE SET STATISTICS command, or \nglobally by setting the default_statistics_target configuration \nvariable. The default limit is presently 100 entries.\"\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Jan 2013 17:47:09 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Analyze and default_statistics_target"
},
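As a concrete illustration (table and column names are made up): the target can be raised per column, and ANALYZE then samples on the order of 300 rows per unit of the target, not a percentage of the table.

ALTER TABLE big_table ALTER COLUMN some_col SET STATISTICS 500;
ANALYZE big_table;

-- or raise the default for the session and re-analyze:
SET default_statistics_target = 200;
ANALYZE big_table;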
{
"msg_contents": "AJ Weber wrote:\r\n> What is the unit-of-measure used for default_statistics_target?\r\n\r\nNumber of entries in pg_stats.histogram_bounds orpg_stats.most_common_vals.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Jan 2013 16:00:07 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Analyze and default_statistics_target"
},
{
"msg_contents": "On Mon, Jan 21, 2013 at 9:05 AM, rudi <[email protected]> wrote:\n> Hello,\n>\n> I'm running postgresl 9.0. After partitioning a big table, CPU usage raised\n> from average 5-10% to average 70-80%.\n>\n> - the table contains about 20.000.000 rows\n> - partitions are selected using a trigger, based on an indexed field, a date\n> (IF date_taken >= x AND date_taken < y)\n> - I created 5 partitions, the 2012 one now contains most of the rows. The\n> 2013 partition is the \"live\" partition, mostly insert, a few select based on\n> the above indexed field. The 2013, 2014, 2015 partitions are empty\n> - constraint execution is on.\n>\n> I have 2 weeks CPU usage reports and the pattern definately changed after I\n> made the partitions. Any idea?\n\nFirst thing that jumps to mind is you have some seq-scan heavy plans\nthat were not seq-scan before. Could be due to query fooling CE\nmechanism or some other CE (probably fixable issue). To diagnose we\nneed to see some explain analyze plans of queries that are using\nhigher than expected cpu usage.\n\nSecond possible cause is trigger overhead from inserts. Not likely to\ncause so much of a jump, but if this is the issue suggested\noptimization path is to insert directly to the partition.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Jan 2013 11:13:31 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage after partitioning"
},
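Merlin's second point in practice, as a sketch using the partition naming that shows up later in the thread (only the two columns known from the thread are listed; the real table has more):

INSERT INTO sb_logs_2013 (device_id, date_taken)
VALUES (901, DATE '2013-01-21');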
{
"msg_contents": "\nOn 01/21/2013 10:05 AM, rudi wrote:\n> Hello,\n>\n> I'm running postgresl 9.0. After partitioning a big table, CPU usage \n> raised from average 5-10% to average 70-80%.\n>\n> - the table contains about 20.000.000 rows\n> - partitions are selected using a trigger, based on an indexed field, \n> a date (IF date_taken >= x AND date_taken < y)\n> - I created 5 partitions, the 2012 one now contains most of the rows. \n> The 2013 partition is the \"live\" partition, mostly insert, a few \n> select based on the above indexed field. The 2013, 2014, 2015 \n> partitions are empty\n> - constraint execution is on.\n> I have 2 weeks CPU usage reports and the pattern definately changed \n> after I made the partitions. Any idea?\n>\n>\n\nWell, the first question that comes to my mind is whether it's the \ninserts that are causing the load or the reads. If it's the inserts then \nyou should show us the whole trigger. Does it by any chance use 'execute'?\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Jan 2013 19:41:29 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage after partitioning"
},
{
"msg_contents": "On Tue, Jan 22, 2013 at 1:41 AM, Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 01/21/2013 10:05 AM, rudi wrote:\n>\n>> Hello,\n>>\n>> I'm running postgresl 9.0. After partitioning a big table, CPU usage\n>> raised from average 5-10% to average 70-80%.\n>>\n>> - the table contains about 20.000.000 rows\n>> - partitions are selected using a trigger, based on an indexed field, a\n>> date (IF date_taken >= x AND date_taken < y)\n>> - I created 5 partitions, the 2012 one now contains most of the rows. The\n>> 2013 partition is the \"live\" partition, mostly insert, a few select based\n>> on the above indexed field. The 2013, 2014, 2015 partitions are empty\n>> - constraint execution is on.\n>> I have 2 weeks CPU usage reports and the pattern definately changed after\n>> I made the partitions. Any idea?\n>>\n>>\n>>\n> Well, the first question that comes to my mind is whether it's the inserts\n> that are causing the load or the reads. If it's the inserts then you should\n> show us the whole trigger. Does it by any chance use 'execute'?\n\n\nI think I found the culprit. The insert trigger doesn't seem to be an\nissue. It is a trivial IF-ELSE and inserts seems fast.\n\nIF (NEW.date_taken < DATE '2013-01-01') THEN\n INSERT INTO sb_logs_2012 VALUES (NEW.*);\nELSIF (NEW.date_taken >= DATE '2013-01-01' AND NEW.date_taken < DATE\n'2014-01-01') THEN\n INSERT INTO sb_logs_2013 VALUES (NEW.*);\n[...]\nEND IF;\n\nEvery query has been carefully optimized, child tables are indexed. The\ntable(s) has a UNIQUE index on (\"date_taken\", \"device_id\") and \"date_taken\"\nis the partitioning column (one partition per year).\nThere are few well known access path to this table: INSERTs (40-50.000 each\nday), SELECTs on a specific device_id AND on a specific day.\n\nBUT, I discovered an access path used by a process every few secs. 
to get\nthe last log for a given device, and this query became really slow after\npartitioning:\n\nResult (cost=341156.04..341182.90 rows=4 width=86) (actual\ntime=1132.326..1132.329 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Aggregate (cost=341156.03..341156.04 rows=1 width=8) (actual\ntime=1132.295..1132.296 rows=1 loops=1)\n -> Append (cost=0.00..341112.60 rows=17371 width=8) (actual\ntime=45.600..1110.057 rows=19016 loops=1)\n -> Seq Scan on sb_logs (cost=0.00..0.00 rows=1 width=8)\n(actual time=0.000..0.000 rows=0 loops=1)\n Filter: (device_id = 901)\n -> Index Scan using\nsb_logs_2012_on_date_taken_and_device_id on sb_logs_2012 sb_logs\n (cost=0.00..319430.51 rows=16003 width=8) (actual time=45.599..1060.143\nrows=17817 loops=1)\n Index Cond: (device_id = 901)\n -> Index Scan using\nsb_logs_2013_on_date_taken_and_device_id on sb_logs_2013 sb_logs\n (cost=0.00..21663.39 rows=1363 width=8) (actual time=0.022..47.661\nrows=1199 loops=1)\n Index Cond: (device_id = 901)\n -> Bitmap Heap Scan on sb_logs_2014 sb_logs\n (cost=10.25..18.71 rows=4 width=8) (actual time=0.011..0.011 rows=0\nloops=1)\n Recheck Cond: (device_id = 901)\n -> Bitmap Index Scan on\nsb_logs_2014_on_date_taken_and_device_id (cost=0.00..10.25 rows=4 width=0)\n(actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (device_id = 901)\n -> Append (cost=0.00..26.86 rows=4 width=86) (actual\ntime=1132.325..1132.328 rows=1 loops=1)\n -> Seq Scan on sb_logs (cost=0.00..0.00 rows=1 width=90) (actual\ntime=0.002..0.002 rows=0 loops=1)\n Filter: ((device_id = 901) AND (date_taken = $0))\n -> Index Scan using sb_logs_2012_on_date_taken_and_device_id on\nsb_logs_2012 sb_logs (cost=0.00..10.20 rows=1 width=90) (actual\ntime=1132.314..1132.314 rows=0 loops=1)\n Index Cond: ((date_taken = $0) AND (device_id = 901))\n -> Index Scan using sb_logs_2013_on_date_taken_and_device_id on\nsb_logs_2013 sb_logs (cost=0.00..8.39 rows=1 width=91) (actual\ntime=0.007..0.008 rows=1 loops=1)\n Index Cond: ((date_taken = $0) AND (device_id = 901))\n -> Index Scan using sb_logs_2014_on_date_taken_and_device_id on\nsb_logs_2014 sb_logs (cost=0.00..8.27 rows=1 width=72) (actual\ntime=0.002..0.002 rows=0 loops=1)\n Index Cond: ((date_taken = $0) AND (device_id = 901))\nTotal runtime: 1132.436 ms\n\nI must find a better way to get that information, but I wonder if there\ncould be a better plan. The same query over a table with the same structure\nbut not partitioned gives far better plan:\n\nIndex Scan using index_iv_logs_on_date_taken_and_device_id on iv_logs\n (cost=12.35..21.88 rows=1 width=157) (actual time=0.065..0.066 rows=1\nloops=1)\n Index Cond: ((date_taken = $1) AND (device_id = 1475))\n InitPlan 2 (returns $1)\n -> Result (cost=12.34..12.35 rows=1 width=0) (actual\ntime=0.059..0.059 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..12.34 rows=1 width=8) (actual\ntime=0.055..0.056 rows=1 loops=1)\n -> Index Scan Backward using\nindex_iv_logs_on_date_taken_and_device_id on iv_logs (cost=0.00..261052.53\nrows=21154 width=8) (actual time=0.055..0.055 rows=1 loops=1)\n Index Cond: ((date_taken IS NOT NULL) AND\n(device_id = 1475))\nTotal runtime: 0.110 ms\n\n\n-- \nrd\n\nThis is the way the world ends.\nNot with a bang, but a whimper.\n\nOn Tue, Jan 22, 2013 at 1:41 AM, Andrew Dunstan <[email protected]> wrote:\n\nOn 01/21/2013 10:05 AM, rudi wrote:\n\nHello,\n\nI'm running postgresl 9.0. 
After partitioning a big table, CPU usage raised from average 5-10% to average 70-80%.\n\n- the table contains about 20.000.000 rows\n- partitions are selected using a trigger, based on an indexed field, a date (IF date_taken >= x AND date_taken < y)\n- I created 5 partitions, the 2012 one now contains most of the rows. The 2013 partition is the \"live\" partition, mostly insert, a few select based on the above indexed field. The 2013, 2014, 2015 partitions are empty\n\n\n- constraint execution is on.\nI have 2 weeks CPU usage reports and the pattern definately changed after I made the partitions. Any idea?\n\n\n\n\nWell, the first question that comes to my mind is whether it's the inserts that are causing the load or the reads. If it's the inserts then you should show us the whole trigger. Does it by any chance use 'execute'?\nI think I found the culprit. The insert trigger doesn't seem to be an issue. It is a trivial IF-ELSE and inserts seems fast.IF (NEW.date_taken < DATE '2013-01-01') THEN\n\n INSERT INTO sb_logs_2012 VALUES (NEW.*);ELSIF (NEW.date_taken >= DATE '2013-01-01' AND NEW.date_taken < DATE '2014-01-01') THEN INSERT INTO sb_logs_2013 VALUES (NEW.*);\n[...]END IF;Every query has been carefully optimized, child tables are indexed. The table(s) has a UNIQUE index on (\"date_taken\", \"device_id\") and \"date_taken\" is the partitioning column (one partition per year).\nThere are few well known access path to this table: INSERTs (40-50.000 each day), SELECTs on a specific device_id AND on a specific day.BUT, I discovered an access path used by a process every few secs. to get the last log for a given device, and this query became really slow after partitioning: \nResult (cost=341156.04..341182.90 rows=4 width=86) (actual time=1132.326..1132.329 rows=1 loops=1) InitPlan 1 (returns $0) -> Aggregate (cost=341156.03..341156.04 rows=1 width=8) (actual time=1132.295..1132.296 rows=1 loops=1)\n -> Append (cost=0.00..341112.60 rows=17371 width=8) (actual time=45.600..1110.057 rows=19016 loops=1) -> Seq Scan on sb_logs (cost=0.00..0.00 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: (device_id = 901) -> Index Scan using sb_logs_2012_on_date_taken_and_device_id on sb_logs_2012 sb_logs (cost=0.00..319430.51 rows=16003 width=8) (actual time=45.599..1060.143 rows=17817 loops=1)\n Index Cond: (device_id = 901) -> Index Scan using sb_logs_2013_on_date_taken_and_device_id on sb_logs_2013 sb_logs (cost=0.00..21663.39 rows=1363 width=8) (actual time=0.022..47.661 rows=1199 loops=1)\n Index Cond: (device_id = 901) -> Bitmap Heap Scan on sb_logs_2014 sb_logs (cost=10.25..18.71 rows=4 width=8) (actual time=0.011..0.011 rows=0 loops=1) Recheck Cond: (device_id = 901)\n -> Bitmap Index Scan on sb_logs_2014_on_date_taken_and_device_id (cost=0.00..10.25 rows=4 width=0) (actual time=0.008..0.008 rows=0 loops=1) Index Cond: (device_id = 901)\n -> Append (cost=0.00..26.86 rows=4 width=86) (actual time=1132.325..1132.328 rows=1 loops=1) -> Seq Scan on sb_logs (cost=0.00..0.00 rows=1 width=90) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: ((device_id = 901) AND (date_taken = $0)) -> Index Scan using sb_logs_2012_on_date_taken_and_device_id on sb_logs_2012 sb_logs (cost=0.00..10.20 rows=1 width=90) (actual time=1132.314..1132.314 rows=0 loops=1)\n Index Cond: ((date_taken = $0) AND (device_id = 901)) -> Index Scan using sb_logs_2013_on_date_taken_and_device_id on sb_logs_2013 sb_logs (cost=0.00..8.39 rows=1 width=91) (actual time=0.007..0.008 rows=1 loops=1)\n Index Cond: ((date_taken 
= $0) AND (device_id = 901)) -> Index Scan using sb_logs_2014_on_date_taken_and_device_id on sb_logs_2014 sb_logs (cost=0.00..8.27 rows=1 width=72) (actual time=0.002..0.002 rows=0 loops=1)\n Index Cond: ((date_taken = $0) AND (device_id = 901))Total runtime: 1132.436 msI must find a better way to get that information, but I wonder if there could be a better plan. The same query over a table with the same structure but not partitioned gives far better plan:\nIndex Scan using index_iv_logs_on_date_taken_and_device_id on iv_logs (cost=12.35..21.88 rows=1 width=157) (actual time=0.065..0.066 rows=1 loops=1) Index Cond: ((date_taken = $1) AND (device_id = 1475))\n InitPlan 2 (returns $1) -> Result (cost=12.34..12.35 rows=1 width=0) (actual time=0.059..0.059 rows=1 loops=1) InitPlan 1 (returns $0) -> Limit (cost=0.00..12.34 rows=1 width=8) (actual time=0.055..0.056 rows=1 loops=1)\n -> Index Scan Backward using index_iv_logs_on_date_taken_and_device_id on iv_logs (cost=0.00..261052.53 rows=21154 width=8) (actual time=0.055..0.055 rows=1 loops=1) Index Cond: ((date_taken IS NOT NULL) AND (device_id = 1475))\nTotal runtime: 0.110 ms-- rdThis is the way the world ends.Not with a bang, but a whimper.",
"msg_date": "Tue, 22 Jan 2013 14:34:55 +0100",
"msg_from": "rudi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU usage after partitioning"
},
{
"msg_contents": "On Tue, Jan 22, 2013 at 7:34 AM, rudi <[email protected]> wrote:\n> Every query has been carefully optimized, child tables are indexed. The\n> table(s) has a UNIQUE index on (\"date_taken\", \"device_id\") and \"date_taken\"\n> is the partitioning column (one partition per year).\n> There are few well known access path to this table: INSERTs (40-50.000 each\n> day), SELECTs on a specific device_id AND on a specific day.\n>\n> BUT, I discovered an access path used by a process every few secs. to get\n> the last log for a given device, and this query became really slow after\n> partitioning:\n>\n> Result (cost=341156.04..341182.90 rows=4 width=86) (actual\n> time=1132.326..1132.329 rows=1 loops=1)\n> InitPlan 1 (returns $0)\n> -> Aggregate (cost=341156.03..341156.04 rows=1 width=8) (actual\n> time=1132.295..1132.296 rows=1 loops=1)\n> -> Append (cost=0.00..341112.60 rows=17371 width=8) (actual\n> time=45.600..1110.057 rows=19016 loops=1)\n> -> Seq Scan on sb_logs (cost=0.00..0.00 rows=1 width=8)\n> (actual time=0.000..0.000 rows=0 loops=1)\n> Filter: (device_id = 901)\n> -> Index Scan using\n> sb_logs_2012_on_date_taken_and_device_id on sb_logs_2012 sb_logs\n> (cost=0.00..319430.51 rows=16003 width=8) (actual time=45.599..1060.143\n> rows=17817 loops=1)\n> Index Cond: (device_id = 901)\n> -> Index Scan using\n> sb_logs_2013_on_date_taken_and_device_id on sb_logs_2013 sb_logs\n> (cost=0.00..21663.39 rows=1363 width=8) (actual time=0.022..47.661 rows=1199\n> loops=1)\n> Index Cond: (device_id = 901)\n> -> Bitmap Heap Scan on sb_logs_2014 sb_logs\n> (cost=10.25..18.71 rows=4 width=8) (actual time=0.011..0.011 rows=0 loops=1)\n> Recheck Cond: (device_id = 901)\n> -> Bitmap Index Scan on\n> sb_logs_2014_on_date_taken_and_device_id (cost=0.00..10.25 rows=4 width=0)\n> (actual time=0.008..0.008 rows=0 loops=1)\n> Index Cond: (device_id = 901)\n> -> Append (cost=0.00..26.86 rows=4 width=86) (actual\n> time=1132.325..1132.328 rows=1 loops=1)\n> -> Seq Scan on sb_logs (cost=0.00..0.00 rows=1 width=90) (actual\n> time=0.002..0.002 rows=0 loops=1)\n> Filter: ((device_id = 901) AND (date_taken = $0))\n> -> Index Scan using sb_logs_2012_on_date_taken_and_device_id on\n> sb_logs_2012 sb_logs (cost=0.00..10.20 rows=1 width=90) (actual\n> time=1132.314..1132.314 rows=0 loops=1)\n> Index Cond: ((date_taken = $0) AND (device_id = 901))\n> -> Index Scan using sb_logs_2013_on_date_taken_and_device_id on\n> sb_logs_2013 sb_logs (cost=0.00..8.39 rows=1 width=91) (actual\n> time=0.007..0.008 rows=1 loops=1)\n> Index Cond: ((date_taken = $0) AND (device_id = 901))\n> -> Index Scan using sb_logs_2014_on_date_taken_and_device_id on\n> sb_logs_2014 sb_logs (cost=0.00..8.27 rows=1 width=72) (actual\n> time=0.002..0.002 rows=0 loops=1)\n> Index Cond: ((date_taken = $0) AND (device_id = 901))\n> Total runtime: 1132.436 ms\n>\n> I must find a better way to get that information, but I wonder if there\n> could be a better plan. 
The same query over a table with the same structure\n> but not partitioned gives far better plan:\n>\n> Index Scan using index_iv_logs_on_date_taken_and_device_id on iv_logs\n> (cost=12.35..21.88 rows=1 width=157) (actual time=0.065..0.066 rows=1\n> loops=1)\n> Index Cond: ((date_taken = $1) AND (device_id = 1475))\n> InitPlan 2 (returns $1)\n> -> Result (cost=12.34..12.35 rows=1 width=0) (actual time=0.059..0.059\n> rows=1 loops=1)\n> InitPlan 1 (returns $0)\n> -> Limit (cost=0.00..12.34 rows=1 width=8) (actual\n> time=0.055..0.056 rows=1 loops=1)\n> -> Index Scan Backward using\n> index_iv_logs_on_date_taken_and_device_id on iv_logs (cost=0.00..261052.53\n> rows=21154 width=8) (actual time=0.055..0.055 rows=1 loops=1)\n> Index Cond: ((date_taken IS NOT NULL) AND (device_id\n> = 1475))\n> Total runtime: 0.110 ms\n\nlet's see the query -- it's probably written in such a way so as to\nnot be able to be optimized through CE.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 22 Jan 2013 08:04:39 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage after partitioning"
},
{
"msg_contents": "On Tue, Jan 22, 2013 at 3:04 PM, Merlin Moncure <[email protected]> wrote:\n\n> let's see the query -- it's probably written in such a way so as to\n> not be able to be optimized through CE.\n>\n>\nThe query is pretty simple and standard, the behaviour (and the plan) is\ntotally different when it comes to a partitioned table.\n\nPartioned table query => explain analyze SELECT \"sb_logs\".* FROM \"sb_logs\"\n WHERE (device_id = 901 AND date_taken = (SELECT MAX(date_taken) FROM\nsb_logs WHERE device_id = 901));\nPlain table query => explain analyze SELECT \"iv_logs\".* FROM \"iv_logs\"\n WHERE (device_id = 1475 AND date_taken = (SELECT MAX(date_taken) FROM\niv_logs WHERE device_id = 1475));\n\nsb_logs and iv_logs have identical index structure and similar cardinality\n(about ~12.000.000 rows the first, ~9.000.000 rows the second).\n\nsb_logs PLAN:\n InitPlan 1 (returns $0)\n -> Aggregate (cost=339424.47..339424.48 rows=1 width=8) (actual\ntime=597.742..597.742 rows=1 loops=1)\n -> Append (cost=0.00..339381.68 rows=17114 width=8) (actual\ntime=42.791..594.001 rows=19024 loops=1)\n -> Seq Scan on sb_logs (cost=0.00..0.00 rows=1 width=8)\n(actual time=0.000..0.000 rows=0 loops=1)\n Filter: (device_id = 901)\n -> Index Scan using\nsb_logs_2012_on_date_taken_and_device_id on sb_logs_2012 sb_logs\n (cost=0.00..319430.51 rows=16003 width=8) (actual time=42.789..559.165\nrows=17817 loops=1)\n Index Cond: (device_id = 901)\n -> Index Scan using\nsb_logs_2013_on_date_taken_and_device_id on sb_logs_2013 sb_logs\n (cost=0.00..19932.46 rows=1106 width=8) (actual time=0.037..31.699\nrows=1207 loops=1)\n Index Cond: (device_id = 901)\n -> Bitmap Heap Scan on sb_logs_2014 sb_logs\n (cost=10.25..18.71 rows=4 width=8) (actual time=0.012..0.012 rows=0\nloops=1)\n Recheck Cond: (device_id = 901)\n -> Bitmap Index Scan on\nsb_logs_2014_on_date_taken_and_device_id (cost=0.00..10.25 rows=4 width=0)\n(actual time=0.010..0.010 rows=0 loops=1)\n Index Cond: (device_id = 901)\n -> Append (cost=0.00..26.86 rows=4 width=86) (actual\ntime=597.808..597.811 rows=1 loops=1)\n -> Seq Scan on sb_logs (cost=0.00..0.00 rows=1 width=90) (actual\ntime=0.022..0.022 rows=0 loops=1)\n Filter: ((device_id = 901) AND (date_taken = $0))\n -> Index Scan using sb_logs_2012_on_date_taken_and_device_id on\nsb_logs_2012 sb_logs (cost=0.00..10.20 rows=1 width=90) (actual\ntime=597.773..597.773 rows=0 loops=1)\n Index Cond: ((date_taken = $0) AND (device_id = 901))\n -> Index Scan using sb_logs_2013_on_date_taken_and_device_id on\nsb_logs_2013 sb_logs (cost=0.00..8.39 rows=1 width=91) (actual\ntime=0.011..0.011 rows=1 loops=1)\n Index Cond: ((date_taken = $0) AND (device_id = 901))\n -> Index Scan using sb_logs_2014_on_date_taken_and_device_id on\nsb_logs_2014 sb_logs (cost=0.00..8.27 rows=1 width=72) (actual\ntime=0.003..0.003 rows=0 loops=1)\n Index Cond: ((date_taken = $0) AND (device_id = 901))\nTotal runtime: 598.049 ms\n\niv_logs PLAN:\n\nIndex Scan using index_iv_logs_on_date_taken_and_device_id on iv_logs\n (cost=12.35..21.88 rows=1 width=157) (actual time=0.060..0.060 rows=1\nloops=1)\n Index Cond: ((date_taken = $1) AND (device_id = 1475))\n InitPlan 2 (returns $1)\n -> Result (cost=12.34..12.35 rows=1 width=0) (actual\ntime=0.053..0.053 rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..12.34 rows=1 width=8) (actual\ntime=0.050..0.051 rows=1 loops=1)\n -> Index Scan Backward using\nindex_iv_logs_on_date_taken_and_device_id on iv_logs (cost=0.00..261151.32\nrows=21163 width=8) (actual time=0.046..0.046 
rows=1 loops=1)\n             Index Cond: ((date_taken IS NOT NULL) AND\n(device_id = 1475))\nTotal runtime: 0.101 ms\n\n\n-- \nrd\n\nThis is the way the world ends.\nNot with a bang, but a whimper.\n",
"msg_date": "Tue, 22 Jan 2013 15:21:56 +0100",
"msg_from": "rudi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU usage after partitioning"
},
{
"msg_contents": "\nOn 01/22/2013 09:21 AM, rudi wrote:\n> On Tue, Jan 22, 2013 at 3:04 PM, Merlin Moncure <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> let's see the query -- it's probably written in such a way so as to\n> not be able to be optimized through CE.\n>\n>\n> The query is pretty simple and standard, the behaviour (and the plan) \n> is totally different when it comes to a partitioned table.\n>\n> Partioned table query => explain analyze SELECT \"sb_logs\".* FROM \n> \"sb_logs\" WHERE (device_id = 901 AND date_taken = (SELECT \n> MAX(date_taken) FROM sb_logs WHERE device_id = 901));\n>\n\nAnd there you have it. Constraint exclusion does not work in cases like \nthis. It only works with static expressions (such as a literal date in \nthis case).\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 22 Jan 2013 09:46:24 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage after partitioning"
},
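One way to hand the planner the static value it needs, assuming the child tables carry range CHECKs on date_taken as the trigger logic suggests, is to split the lookup in two so the second query compares against a literal (the date value here is illustrative):

SELECT max(date_taken) FROM sb_logs WHERE device_id = 901;
-- suppose that returned 2013-01-21, then:
SELECT *
FROM sb_logs
WHERE device_id = 901
  AND date_taken = DATE '2013-01-21';  -- partitions whose CHECK range excludes this literal can be pruned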
{
"msg_contents": "On Tue, Jan 22, 2013 at 3:46 PM, Andrew Dunstan <[email protected]> wrote:\n\n> The query is pretty simple and standard, the behaviour (and the plan) is\n> totally different when it comes to a partitioned table.\n>\n>>\n>> Partioned table query => explain analyze SELECT \"sb_logs\".* FROM\n>> \"sb_logs\" WHERE (device_id = 901 AND date_taken = (SELECT MAX(date_taken)\n>> FROM sb_logs WHERE device_id = 901));\n>>\n>>\n> And there you have it. Constraint exclusion does not work in cases like\n> this. It only works with static expressions (such as a literal date in this\n> case).\n\n\nOk, but I would have expected same plant repeated 4 times. When the table\nis not partitioned, the plan is defintely smarter: it knows that index is\nreversed and looks for max with an index scan backward). When the table is\npartitioned, it scan forward and I guess it will always do a full index\nscan.\n\n\n\n-- \nrd\n\nThis is the way the world ends.\nNot with a bang, but a whimper.\n\nOn Tue, Jan 22, 2013 at 3:46 PM, Andrew Dunstan <[email protected]> wrote:\nThe query is pretty simple and standard, the behaviour (and the plan) is totally different when it comes to a partitioned table.\n\nPartioned table query => explain analyze SELECT \"sb_logs\".* FROM \"sb_logs\" WHERE (device_id = 901 AND date_taken = (SELECT MAX(date_taken) FROM sb_logs WHERE device_id = 901));\n\n\n\nAnd there you have it. Constraint exclusion does not work in cases like this. It only works with static expressions (such as a literal date in this case).Ok, but I would have expected same plant repeated 4 times. When the table is not partitioned, the plan is defintely smarter: it knows that index is reversed and looks for max with an index scan backward). When the table is partitioned, it scan forward and I guess it will always do a full index scan. \n-- rdThis is the way the world ends.Not with a bang, but a whimper.",
"msg_date": "Tue, 22 Jan 2013 16:08:09 +0100",
"msg_from": "rudi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU usage after partitioning"
},
{
"msg_contents": "In PG 9.2 I’m getting “Index Only Scan Backward” for every partition in the first part of execution plan, when looking for MAX in partitioned table on a similar query:\r\n\r\n\" -> Index Only Scan Backward using pk_cycle_200610 on gp_cycle_200610 gp_cycle (cost=0.00..8.34 rows=5 width=8) (actual time=0.021..0.021 rows=1 loops=1)\"\r\n\" Index Cond: (cycle_date_time IS NOT NULL)\"\r\n\" Heap Fetches: 0\"\r\n\r\nMay be you should upgrade to 9.2.\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\r\nFrom: rudi [mailto:[email protected]]\r\nSent: Tuesday, January 22, 2013 10:08 AM\r\nTo: [email protected]\r\nSubject: Re: High CPU usage after partitioning\r\n\r\nOn Tue, Jan 22, 2013 at 3:46 PM, Andrew Dunstan <[email protected]<mailto:[email protected]>> wrote:\r\nThe query is pretty simple and standard, the behaviour (and the plan) is totally different when it comes to a partitioned table.\r\n\r\nPartioned table query => explain analyze SELECT \"sb_logs\".* FROM \"sb_logs\" WHERE (device_id = 901 AND date_taken = (SELECT MAX(date_taken) FROM sb_logs WHERE device_id = 901));\r\n\r\nAnd there you have it. Constraint exclusion does not work in cases like this. It only works with static expressions (such as a literal date in this case).\r\n\r\nOk, but I would have expected same plant repeated 4 times. When the table is not partitioned, the plan is defintely smarter: it knows that index is reversed and looks for max with an index scan backward). When the table is partitioned, it scan forward and I guess it will always do a full index scan.\r\n\r\n\r\n\r\n--\r\nrd\r\n\r\nThis is the way the world ends.\r\nNot with a bang, but a whimper.\r\n\n\n\n\n\n\n\n\n\nIn PG 9.2 I’m getting “Index Only Scan Backward” for every partition in the first part of execution plan, when looking for MAX in partitioned table on a similar\r\n query:\n \n\" -> Index Only Scan Backward using pk_cycle_200610 on gp_cycle_200610 gp_cycle (cost=0.00..8.34 rows=5 width=8) (actual time=0.021..0.021\r\n rows=1 loops=1)\"\n\" Index Cond: (cycle_date_time IS NOT NULL)\"\n\" Heap Fetches: 0\"\n \nMay be you should upgrade to 9.2.\n \nRegards,\nIgor Neyman\n \n \n\n\n\nFrom: rudi [mailto:[email protected]]\r\n\nSent: Tuesday, January 22, 2013 10:08 AM\nTo: [email protected]\nSubject: Re: High CPU usage after partitioning\n\n\n \nOn Tue, Jan 22, 2013 at 3:46 PM, Andrew Dunstan <[email protected]> wrote:\n\n\nThe query is pretty simple and standard, the behaviour (and the plan) is totally different when it comes to a partitioned table.\n\n\n\r\nPartioned table query => explain analyze SELECT \"sb_logs\".* FROM \"sb_logs\" WHERE (device_id = 901 AND date_taken = (SELECT MAX(date_taken) FROM sb_logs WHERE device_id = 901));\n\n \n\nAnd there you have it. Constraint exclusion does not work in cases like this. It only works with static expressions (such as a literal date in this case).\n\n\n \n\n\nOk, but I would have expected same plant repeated 4 times. When the table is not partitioned, the plan is defintely smarter: it knows that index is reversed and looks for max with an index scan backward). When the table is partitioned,\r\n it scan forward and I guess it will always do a full index scan. \n\n\n\n\n\n\n \n\n-- \r\nrd\n\r\nThis is the way the world ends.\r\nNot with a bang, but a whimper.",
"msg_date": "Tue, 22 Jan 2013 15:42:46 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage after partitioning"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 01/22/2013 09:21 AM, rudi wrote:\n>> The query is pretty simple and standard, the behaviour (and the plan) \n>> is totally different when it comes to a partitioned table.\n>> \n>> Partioned table query => explain analyze SELECT \"sb_logs\".* FROM \n>> \"sb_logs\" WHERE (device_id = 901 AND date_taken = (SELECT \n>> MAX(date_taken) FROM sb_logs WHERE device_id = 901));\n\n> And there you have it. Constraint exclusion does not work in cases like \n> this. It only works with static expressions (such as a literal date in \n> this case).\n\nThis isn't about constraint exclusion I think. The main problem is in\nthe sub-select: 9.0 isn't able to index-optimize a MAX() across a\npartitioned table, for lack of MergeAppend, so you end up scanning lots\nof rows there. 9.1 or 9.2 should be better.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 22 Jan 2013 13:38:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage after partitioning"
}
] |
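A minimal sketch of the workaround implied by the replies above for 9.0, where MAX() over a partitioned parent cannot be index-optimized (no MergeAppend): fetch the max first, then re-run the real query with the result as a literal so constraint exclusion can prune partitions at plan time. Table, partition, and index names here are hypothetical, not taken from the thread; on 9.1/9.2 the planner handles the MAX() itself, as Tom and Igor note.

-- Hypothetical inheritance-based layout (pre-declarative partitioning).
CREATE TABLE logs (device_id int, date_taken date, payload text);
CREATE TABLE logs_2013_01 (
    CHECK (date_taken >= DATE '2013-01-01' AND date_taken < DATE '2013-02-01')
) INHERITS (logs);
CREATE INDEX logs_2013_01_dev_date_idx ON logs_2013_01 (device_id, date_taken);

-- Step 1: find the latest date for the device (each child needs its own index).
SELECT max(date_taken) FROM logs WHERE device_id = 901;

-- Step 2: run the real query with the date as a literal, so the planner can
-- exclude non-matching partitions at plan time.
SELECT * FROM logs WHERE device_id = 901 AND date_taken = DATE '2013-01-22';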
[
{
"msg_contents": "Merlin Moncure wrote:\n\n>> I'm running postgresl 9.0. After partitioning a big table, CPU\n>> usage raised from average 5-10% to average 70-80%.\n\n> First thing that jumps to mind is you have some seq-scan heavy\n> plans that were not seq-scan before.\n\nMake sure that all indexes are defined for each partition. It is\nnot enough to define them on just the parent level.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 21 Jan 2013 16:12:14 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU usage after partitioning"
}
] |
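Kevin's point above, sketched with hypothetical child-table names: with inheritance-based partitioning, an index created on the parent is not propagated to the children, so each partition needs its own copy of every index the queries rely on.

CREATE INDEX sb_logs_2013_01_dev_date_idx ON sb_logs_2013_01 (device_id, date_taken);
CREATE INDEX sb_logs_2013_02_dev_date_idx ON sb_logs_2013_02 (device_id, date_taken);
-- ...and so on for every remaining child partition.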
[
{
"msg_contents": "Greetings.\n\nI've been playing with a small query that I've been asked to optimize\nand noticed a strange (for me) effect.\nQuery uses this table:\n\n Table \"clc06_tiles\"\n Column | Type |\nModifiers\n------------+-----------------------+-----------------------------------------------------------\n geometry | geometry |\n code_06 | character varying(3) |\n gid | bigint | not null default\nnextval('clc06_tiles_gid_seq'::regclass)\nIndexes:\n \"clc06_tiles_pkey\" PRIMARY KEY, btree (gid)\n \"i_clc06_tiles_geometry\" gist (geometry)\nCheck constraints:\n \"enforce_dims_geom\" CHECK (st_ndims(geometry) = 2)\n \"enforce_geotype_geom\" CHECK (geometrytype(geometry) =\n'MULTIPOLYGON'::text OR geometrytype(geometry) = 'POLYGON'::text OR\ngeometry IS NULL)\n \"enforce_srid_geom\" CHECK (st_srid(geometry) = 3035)\n\nand this function:\nCREATE OR REPLACE FUNCTION my_trans(x1 float8, y1 float8, x2 float8,\ny2 float8) RETURNS geometry AS $my_trans$\n SELECT st_Transform(\n st_GeomFromText('LINESTRING('||x1::text||' '||y1::text||\n ', '||x2::text||' '||y2::text||')',4326),3035);\n$my_trans$ LANGUAGE sql IMMUTABLE STRICT;\n\nand these constants:\n\\set x1 4.56\n\\set y1 52.54\n\\set x2 5.08\n\\set y2 53.34\n\n\nOriginal query looks like this ( http://explain.depesz.com/s/pzv ):\n\nSELECT n, i.*, st_NumGeometries(i.geom)\n FROM (\n SELECT a.code_06 as code_06,\n st_Multi(st_Intersection(a.geometry,\nmy_trans(:x1,:y1,:x2,:y2))) as geom\n FROM clc06_tiles a\n WHERE st_Intersects(a.geometry, my_trans(:x1,:y1,:x2,:y2))) i\n JOIN generate_series(1,10) n ON n <= st_NumGeometries(i.geom);\n\n\nAfter a while I added row_number() to the inner part (\nhttp://explain.depesz.com/s/hfs ):\n\nSELECT n, i.*, st_NumGeometries(i.geom)\n FROM (\n SELECT row_number() OVER () AS rn, a.code_06 as code_06,\n st_Multi(st_Intersection(a.geometry,\nmy_trans(:x1,:y1,:x2,:y2))) as geom\n FROM clc06_tiles a\n WHERE st_Intersects(a.geometry, my_trans(:x1,:y1,:x2,:y2))) i\n JOIN generate_series(1,10) n ON n <= st_NumGeometries(i.geom);\n\n\nIt was really surprising to see a \"side\" effect of 8x performance boost.\nThe only difference I can see is an extra WindowAgg step in the second variant.\n\n\n\nCould you kindly explain how WindowAgg node affects the overall\nperformance, please?\n\n\n\nPostgreSQL 9.2.1 on x86_64-unknown-linux-gnu, compiled by gcc\n(Ubuntu/Linaro 4.6.1-9ubuntu3) 4.6.1, 64-bit\n archive_command | (disabled) | configuration file\n bgwriter_delay | 100ms | configuration file\n bgwriter_lru_maxpages | 200 | configuration file\n checkpoint_segments | 30 | configuration file\n default_text_search_config | pg_catalog.english | configuration file\n effective_cache_size | 3GB | configuration file\n listen_addresses | * | configuration file\n log_checkpoints | on | configuration file\n log_connections | on | configuration file\n log_destination | csvlog | configuration file\n log_disconnections | on | configuration file\n log_lock_waits | on | configuration file\n log_min_duration_statement | 100ms | configuration file\n log_rotation_age | 1d | configuration file\n log_temp_files | 20MB | configuration file\n log_timezone | UTC | configuration file\n logging_collector | on | configuration file\n maintenance_work_mem | 64MB | configuration file\n max_connections | 100 | configuration file\n max_stack_depth | 2MB | environment variable\n max_wal_senders | 2 | configuration file\n port | 5432 | configuration file\n shared_buffers | 768MB | configuration file\n temp_buffers | 32MB | configuration file\n 
TimeZone | UTC | configuration file\n wal_level | hot_standby | configuration file\n work_mem | 8MB | configuration file\n\n\n-- \nVictor Y. Yegorov\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 22 Jan 2013 22:57:50 +0200",
"msg_from": "=?UTF-8?B?0JLQuNC60YLQvtGAINCV0LPQvtGA0L7Qsg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Effect of the WindowAgg on the Nested Loop"
},
{
"msg_contents": "On Tue, Jan 22, 2013 at 3:57 PM, Виктор Егоров <[email protected]> wrote:\n> Greetings.\n>\n> I've been playing with a small query that I've been asked to optimize\n> and noticed a strange (for me) effect.\n> Query uses this table:\n>\n> Table \"clc06_tiles\"\n> Column | Type |\n> Modifiers\n> ------------+-----------------------+-----------------------------------------------------------\n> geometry | geometry |\n> code_06 | character varying(3) |\n> gid | bigint | not null default\n> nextval('clc06_tiles_gid_seq'::regclass)\n> Indexes:\n> \"clc06_tiles_pkey\" PRIMARY KEY, btree (gid)\n> \"i_clc06_tiles_geometry\" gist (geometry)\n> Check constraints:\n> \"enforce_dims_geom\" CHECK (st_ndims(geometry) = 2)\n> \"enforce_geotype_geom\" CHECK (geometrytype(geometry) =\n> 'MULTIPOLYGON'::text OR geometrytype(geometry) = 'POLYGON'::text OR\n> geometry IS NULL)\n> \"enforce_srid_geom\" CHECK (st_srid(geometry) = 3035)\n>\n> and this function:\n> CREATE OR REPLACE FUNCTION my_trans(x1 float8, y1 float8, x2 float8,\n> y2 float8) RETURNS geometry AS $my_trans$\n> SELECT st_Transform(\n> st_GeomFromText('LINESTRING('||x1::text||' '||y1::text||\n> ', '||x2::text||' '||y2::text||')',4326),3035);\n> $my_trans$ LANGUAGE sql IMMUTABLE STRICT;\n>\n> and these constants:\n> \\set x1 4.56\n> \\set y1 52.54\n> \\set x2 5.08\n> \\set y2 53.34\n>\n>\n> Original query looks like this ( http://explain.depesz.com/s/pzv ):\n>\n> SELECT n, i.*, st_NumGeometries(i.geom)\n> FROM (\n> SELECT a.code_06 as code_06,\n> st_Multi(st_Intersection(a.geometry,\n> my_trans(:x1,:y1,:x2,:y2))) as geom\n> FROM clc06_tiles a\n> WHERE st_Intersects(a.geometry, my_trans(:x1,:y1,:x2,:y2))) i\n> JOIN generate_series(1,10) n ON n <= st_NumGeometries(i.geom);\n>\n>\n> After a while I added row_number() to the inner part (\n> http://explain.depesz.com/s/hfs ):\n>\n> SELECT n, i.*, st_NumGeometries(i.geom)\n> FROM (\n> SELECT row_number() OVER () AS rn, a.code_06 as code_06,\n> st_Multi(st_Intersection(a.geometry,\n> my_trans(:x1,:y1,:x2,:y2))) as geom\n> FROM clc06_tiles a\n> WHERE st_Intersects(a.geometry, my_trans(:x1,:y1,:x2,:y2))) i\n> JOIN generate_series(1,10) n ON n <= st_NumGeometries(i.geom);\n>\n>\n> It was really surprising to see a \"side\" effect of 8x performance boost.\n> The only difference I can see is an extra WindowAgg step in the second variant.\n>\n> Could you kindly explain how WindowAgg node affects the overall\n> performance, please?\n\nApologies for resurrecting an old thread, but I just came across this\npost while doing some research and I don't see any responses.\n\nThis seems like a mighty interesting example. I'm not sure what's\ngoing on here, but let me guess. I think that the WindowAgg is\nforcing some operation - detoasting, maybe? - to happen under the\nmaterialize node. As a result, it only gets done once. But in the\nother plan, the detoast happens at the nested loop level, above the\nmaterialize node, and therefore it happens 10x instead of 1x.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 May 2013 14:30:20 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Effect of the WindowAgg on the Nested Loop"
},
{
"msg_contents": "2013/5/15 Robert Haas <[email protected]>\n\n> > Original query looks like this ( http://explain.depesz.com/s/pzv ):\n> >\n> > After a while I added row_number() to the inner part (\n> > http://explain.depesz.com/s/hfs ):\n> >\n> > It was really surprising to see a \"side\" effect of 8x performance boost.\n> > The only difference I can see is an extra WindowAgg step in the second\n> variant.\n>\n> Apologies for resurrecting an old thread, but I just came across this\n> post while doing some research and I don't see any responses.\n>\n> This seems like a mighty interesting example. I'm not sure what's\n> going on here, but let me guess. I think that the WindowAgg is\n> forcing some operation - detoasting, maybe? - to happen under the\n> materialize node. As a result, it only gets done once. But in the\n> other plan, the detoast happens at the nested loop level, above the\n> materialize node, and therefore it happens 10x instead of 1x.\n>\n\nI was playing with the query a while ago and put it aside since then,\nneed time to come back to this thing.\n\nI will try to put together a testcase for this example, I'd like to achieve\nthe same behavior on a non-GIS data set.\n\n-- \nVictor Y. Yegorov\n\n2013/5/15 Robert Haas <[email protected]>\n> Original query looks like this ( http://explain.depesz.com/s/pzv ):>\n> After a while I added row_number() to the inner part (\n> http://explain.depesz.com/s/hfs ):\n>\n> It was really surprising to see a \"side\" effect of 8x performance boost.\n> The only difference I can see is an extra WindowAgg step in the second variant.\nApologies for resurrecting an old thread, but I just came across this\npost while doing some research and I don't see any responses.\n\nThis seems like a mighty interesting example. I'm not sure what's\ngoing on here, but let me guess. I think that the WindowAgg is\nforcing some operation - detoasting, maybe? - to happen under the\nmaterialize node. As a result, it only gets done once. But in the\nother plan, the detoast happens at the nested loop level, above the\nmaterialize node, and therefore it happens 10x instead of 1x.I was playing with the query a while ago and put it aside since then,\nneed time to come back to this thing.I will try to put together a testcase for this example, I'd like to achievethe same behavior on a non-GIS data set.\n-- Victor Y. Yegorov",
"msg_date": "Wed, 15 May 2013 23:20:43 +0300",
"msg_from": "Victor Yegorov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Effect of the WindowAgg on the Nested Loop"
}
] |
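The mechanism Robert describes is only a guess, but the pattern Victor stumbled on can be sketched generically (the table and function names below are made up for illustration, and the benefit is not guaranteed): adding a no-op window function to the inner query introduces a WindowAgg node, and in this report that caused the expensive inner expressions to be evaluated once under the Materialize node instead of repeatedly in the outer nested loop.

SELECT n, i.*
  FROM (SELECT row_number() OVER () AS rn,     -- result unused; only forces a WindowAgg
               expensive_fn(t.geom) AS val     -- hypothetical costly expression
          FROM some_table t
         WHERE some_filter(t.geom)) i
  JOIN generate_series(1, 10) n ON n <= i.val;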
[
{
"msg_contents": "AJ Weber wrote:\n\n> Is it possible that some spikes in IO could be attributable to\n> the autovacuum process? Is there a way to check this theory?\n\nTaking a look at the ps aux listing, pg_stat_activity, and pg_locks\nshould help establish a cause, or at least rule out a number of\npossibilities. There is a known issue with autovacuum when it tries\nto reduce the size of a table which is found to be larger than it\ncurrently needs to be while other transactions try to access the\ntable. This issue will be fixed in the next minor release for 9.0\nand above. If this is the issue a manual VACUUM ANALYZE will fix\nthings -- at least for a while.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 23 Jan 2013 17:17:00 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum fringe case?"
}
] |
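A quick way to run the check Kevin suggests, assuming a 9.2-or-later server (on 9.1 and earlier the columns are procpid and current_query instead of pid and query):

-- Is an autovacuum worker currently busy, and on what?
SELECT pid, query_start, query
  FROM pg_stat_activity
 WHERE query LIKE 'autovacuum:%';

-- Is it holding or waiting on locks that other sessions contend for?
SELECT l.pid, l.relation::regclass, l.mode, l.granted
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.pid = l.pid
 WHERE a.query LIKE 'autovacuum:%';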
[
{
"msg_contents": "Hi,\n\nI am creating benchmark for postgresql database with tpc-w specification.\nI expect proper benchmark for postgresql with tpc-w specification.\n\npostgresql version:-\npsql (8.4.13, server 9.2.1)\n\ni installed postgresql using source code.\nport is 5432\nno other changes made to configuration.\n\noperating system centos 6.2 x64\n\nconnecting to postgres using psql.\n--------------------------------------------------------------\nWhat i have done:-\ni have written script for DDL's and commands to load data into table in\ntpcw_load.js.\n\nWhat i want to do:-\nnow i want to write tpcw.js with DML's according to specification given for\ntpc-w benchmark .\nwhich give load on database and calculate transactions per second.\n\nIf any one has DML's according to specification given for tpc-w benchmark\nand related data/information.\n\nplease reply.\n\nThanks,\nSachin Kotwal,\[email protected]\n\nHi,I am creating benchmark for postgresql database with tpc-w specification.I expect proper benchmark for postgresql with tpc-w specification.postgresql version:-\npsql (8.4.13, server 9.2.1)i installed postgresql using source code.port is 5432no other changes made to configuration.operating system centos 6.2 x64\nconnecting to postgres using psql.--------------------------------------------------------------What i have done:-i have written script for DDL's and commands to load data into table in tpcw_load.js.\nWhat i want to do:-now i want to write tpcw.js with DML's according to specification given for tpc-w benchmark .which give load on database and calculate transactions per second.\nIf any one has DML's according to specification given for tpc-w benchmark and related data/information.please reply.Thanks,Sachin Kotwal,\[email protected]",
"msg_date": "Fri, 25 Jan 2013 12:08:32 +0530",
"msg_from": "\"Sachin D. Kotwal\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "DML's to tpcw benchmark for postgresql"
}
] |
[
{
"msg_contents": "Hi,\n\nLast weekend, we upgrade a PG from 8.4 to 9.2 version (full \npg_dump/restore/vacuum/analyze). After this, some simple join querys\nbecame very slow, maybe because the use of nested loops. Bellow, an \nexample with nestedloop on and off:\n\nbase=# analyze verbose pc13t;\nINFO: analyzing \"public.pc13t\"\nINFO: \"pc13t\": scanned 30000 of 212313 pages, containing 268677 live \nrows and 764 dead rows; 30000 rows in sample, 1903065 estimated total rows\nANALYZE\nbase=# analyze verbose pc13t3;\nINFO: analyzing \"public.pc13t3\"\nINFO: \"pc13t3\": scanned 30000 of 883424 pages, containing 1274216 live \nrows and 3005 dead rows; 30000 rows in sample, 37553206 estimated total rows\nANALYZE\nbase=# set enable_nestloop = on;\nSET\nbase=# explain (analyze,buffers) SELECT T1.Pc13Item, T1.PC13CodPed, \nT1.PC13AnoPed, T1.PC13Emp08P, T1.PC13Cor, T1.PC13Codigo, T1.PC13Emp08, \nT1.PC13ProEst AS PC13ProEst, T2.PC13Grade, T4.PC07GerPed, T2.PC13EmpGra, \nT1.PC13Emp06P AS PC13Emp06P, T4.PC07NaoTV, T3.co13PorTam, T4.PC07CodAlm, \nT4.PC07C_Cust AS PC13SecIns, T3.co13Bloq AS PC13ProBlq, T1.PC13InsEst AS \nPC13InsEst, T1.PC13TipIn2 AS PC13TipIn2, T1.PC13EmpIns AS PC13EmpIns \nFROM (((PC13T3 T1 LEFT JOIN PC13T T2 ON T2.PC13Emp08 = T1.PC13Emp08 AND \nT2.PC13Codigo = T1.PC13Codigo AND T2.PC13Cor = T1.PC13Cor AND \nT2.PC13Emp08P = T1.PC13Emp08P AND T2.PC13AnoPed = T1.PC13AnoPed AND \nT2.PC13CodPed = T1.PC13CodPed AND T2.Pc13Item = T1.Pc13Item) LEFT JOIN \nCO13T T3 ON T3.co13Emp06 = T1.PC13Emp06P AND T3.co13CodPro = \nT1.PC13ProEst) LEFT JOIN PC07T T4 ON T4.PC07CodEmp = T1.PC13EmpIns AND \nT4.PC07Tipo = T1.PC13TipIn2 AND T4.PC07Codigo = T1.PC13InsEst) WHERE \nT1.PC13Emp08 = '1' and T1.PC13Codigo = E'9487-491C ' and \nT1.PC13Cor = '1' and T1.PC13Emp08P = '0' and T1.PC13AnoPed = '0' and \nT1.PC13CodPed = '0' and T1.Pc13Item = '0' ORDER BY T1.PC13Emp08, \nT1.PC13Codigo, T1.PC13Cor, T1.PC13Emp08P, T1.PC13AnoPed, T1.PC13CodPed, \nT1.Pc13Item, T1.PC13EmpIns, T1.PC13TipIn2, T1.PC13InsEst ;\n \n QUERY PLAN \n \n\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------------\n Nested Loop Left Join (cost=0.00..20.61 rows=1 width=86) (actual \ntime=0.870..1197.057 rows=39 loops=1)\n Buffers: shared hit=424602 read=5\n -> Nested Loop Left Join (cost=0.00..16.33 rows=1 width=74) \n(actual time=0.822..1196.583 rows=39 loops=1)\n Buffers: shared hit=424484 read=5\n -> Nested Loop Left Join (cost=0.00..12.01 rows=1 width=70) \n(actual time=0.779..1195.477 rows=39 loops=1)\n Join Filter: ((t2.pc13emp08 = t1.pc13emp08) AND \n(t2.pc13codigo = t1.pc13codigo) AND (t2.pc13cor = t1.pc13cor) AND \n(t2.pc13emp08p = t1.pc13emp08p) AND (t2.pc13anoped = t1.pc13anoped) AND \n(t2.pc13codped = t1.pc13codped) AND\n(t2.pc13item = t1.pc13item))\n Buffers: shared hit=424329 read=5\n -> Index Scan using ad_pc13t3_modpadrao on pc13t3 t1 \n(cost=0.00..6.21 rows=1 width=65) (actual time=0.090..0.252 rows=39 loops=1)\n Index Cond: ((pc13emp08p = 0) AND (pc13anoped = 0) \nAND (pc13codped = 0) AND (pc13codigo = '9487-491C '::bpchar) \nAND (pc13cor = 1) AND (pc13emp08 = 1))\n Buffers: shared hit=5 read=5\n -> Index Scan using xpc13tc on pc13t t2 \n(cost=0.00..5.77 rows=1 width=44) (actual time=0.254..30.638 rows=1 \nloops=39)\n Index Cond: ((pc13emp08p = 0) AND (pc13anoped = 0) \nAND (pc13codped = 0) AND (pc13item = 0))\n Filter: 
((pc13emp08 = 1) AND (pc13codigo = \n'9487-491C '::bpchar) AND (pc13cor = 1))\n Rows Removed by Filter: 45161\n Buffers: shared hit=424324\n -> Index Scan using co13t_pkey on co13t t3 (cost=0.00..4.31 \nrows=1 width=10) (actual time=0.021..0.024 rows=1 loops=39)\n Index Cond: ((co13emp06 = t1.pc13emp06p) AND (co13codpro \n= t1.pc13proest))\n Buffers: shared hit=155\n -> Index Scan using pc07t_pkey on pc07t t4 (cost=0.00..4.28 rows=1 \nwidth=24) (actual time=0.008..0.009 rows=1 loops=39)\n Index Cond: ((pc07codemp = t1.pc13empins) AND (pc07tipo = \nt1.pc13tipin2) AND (pc07codigo = t1.pc13insest))\n Buffers: shared hit=118\n Total runtime: 1197.332 ms\n(22 rows)\n\n#######################################################\n\nbase=# set enable_nestloop = off;\nSET\nbase=# explain (analyze,buffers) SELECT T1.Pc13Item, T1.PC13CodPed, \nT1.PC13AnoPed, T1.PC13Emp08P, T1.PC13Cor, T1.PC13Codigo, T1.PC13Emp08, \nT1.PC13ProEst AS PC13ProEst, T2.PC13Grade, T4.PC07GerPed, T2.PC13EmpGra, \nT1.PC13Emp06P AS PC13Emp06P, T4.PC07NaoTV, T3.co13PorTam, T4.PC07CodAlm, \nT4.PC07C_Cust AS PC13SecIns, T3.co13Bloq AS PC13ProBlq, T1.PC13InsEst AS \nPC13InsEst, T1.PC13TipIn2 AS PC13TipIn2, T1.PC13EmpIns AS PC13EmpIns \nFROM (((PC13T3 T1 LEFT JOIN PC13T T2 ON T2.PC13Emp08 = T1.PC13Emp08 AND \nT2.PC13Codigo = T1.PC13Codigo AND T2.PC13Cor = T1.PC13Cor AND \nT2.PC13Emp08P = T1.PC13Emp08P AND T2.PC13AnoPed = T1.PC13AnoPed AND \nT2.PC13CodPed = T1.PC13CodPed AND T2.Pc13Item = T1.Pc13Item) LEFT JOIN \nCO13T T3 ON T3.co13Emp06 = T1.PC13Emp06P AND T3.co13CodPro = \nT1.PC13ProEst) LEFT JOIN PC07T T4 ON T4.PC07CodEmp = T1.PC13EmpIns AND \nT4.PC07Tipo = T1.PC13TipIn2 AND T4.PC07Codigo = T1.PC13InsEst) WHERE \nT1.PC13Emp08 = '1' and T1.PC13Codigo = E'9487-491C ' and \nT1.PC13Cor = '1' and T1.PC13Emp08P = '0' and T1.PC13AnoPed = '0' and \nT1.PC13CodPed = '0' and T1.Pc13Item = '0' ORDER BY T1.PC13Emp08, \nT1.PC13Codigo, T1.PC13Cor, T1.PC13Emp08P, T1.PC13AnoPed, T1.PC13CodPed, \nT1.Pc13Item, T1.PC13EmpIns, T1.PC13TipIn2, T1.PC13InsEst ;\n \n QUERY PLAN \n \n\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n---------------------\n Sort (cost=7782.21..7782.21 rows=1 width=86) (actual \ntime=108.672..108.677 rows=39 loops=1)\n Sort Key: t1.pc13empins, t1.pc13tipin2, t1.pc13insest\n Sort Method: quicksort Memory: 30kB\n Buffers: shared hit=15960 read=2050\n -> Hash Left Join (cost=154.42..7782.20 rows=1 width=86) (actual \ntime=49.398..108.413 rows=39 loops=1)\n Hash Cond: ((t1.pc13emp08 = t2.pc13emp08) AND (t1.pc13codigo = \nt2.pc13codigo) AND (t1.pc13cor = t2.pc13cor) AND (t1.pc13emp08p = \nt2.pc13emp08p) AND (t1.pc13anoped = t2.pc13anoped) AND (t1.pc13codped = \nt2.pc13codped) AND (t1.pc13\nitem = t2.pc13item))\n Buffers: shared hit=15960 read=2050\n -> Hash Right Join (cost=148.62..7776.37 rows=1 width=81) \n(actual time=6.392..65.317 rows=39 loops=1)\n Hash Cond: ((t3.co13emp06 = t1.pc13emp06p) AND \n(t3.co13codpro = t1.pc13proest))\n Buffers: shared hit=5073 read=2050\n -> Seq Scan on co13t t3 (cost=0.00..7371.56 rows=34156 \nwidth=10) (actual time=0.009..41.550 rows=34156 loops=1)\n Buffers: shared hit=5043 read=1987\n -> Hash (cost=148.61..148.61 rows=1 width=77) (actual \ntime=2.514..2.514 rows=39 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 5kB\n Buffers: shared hit=27 read=63\n -> Hash Right Join (cost=6.23..148.61 rows=1 
\nwidth=77) (actual time=0.141..2.484 rows=39 loops=1)\n Hash Cond: ((t4.pc07codemp = t1.pc13empins) \nAND (t4.pc07tipo = t1.pc13tipin2) AND (t4.pc07codigo = t1.pc13insest))\n Buffers: shared hit=27 read=63\n -> Seq Scan on pc07t t4 (cost=0.00..109.88 \nrows=2888 width=24) (actual time=0.002..1.171 rows=2888 loops=1)\n Buffers: shared hit=18 read=63\n -> Hash (cost=6.21..6.21 rows=1 width=65) \n(actual time=0.123..0.123 rows=39 loops=1)\n Buckets: 1024 Batches: 1 Memory \nUsage: 4kB\n Buffers: shared hit=9\n -> Index Scan using \nad_pc13t3_modpadrao on pc13t3 t1 (cost=0.00..6.21 rows=1 width=65) \n(actual time=0.041..0.090 rows=39 loops=1)\n Index Cond: ((pc13emp08p = 0) \nAND (pc13anoped = 0) AND (pc13codped = 0) AND (pc13codigo = '9487-491C \n '::bpchar) AND (pc13cor = 1) AND (pc13emp08 = 1))\n Buffers: shared hit=9\n -> Hash (cost=5.77..5.77 rows=1 width=44) (actual \ntime=42.937..42.937 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n Buffers: shared hit=10880\n -> Index Scan using xpc13tc on pc13t t2 \n(cost=0.00..5.77 rows=1 width=44) (actual time=0.357..42.933 rows=1 loops=1)\n Index Cond: ((pc13emp08p = 0) AND (pc13anoped = 0) \nAND (pc13codped = 0) AND (pc13item = 0))\n Filter: ((pc13emp08 = 1) AND (pc13codigo = \n'9487-491C '::bpchar) AND (pc13cor = 1))\n Rows Removed by Filter: 45161\n Buffers: shared hit=10880\n Total runtime: 108.884 ms\n(35 rows)\n\n\n\nNested loop is ~12 times slower in this case. The server parameters are \nthe same. Unfortunattly, I don't have the explain in 8.4 version.\nAnd I noticed that the server load grew after the upgrade.\n\nBest regards,\n\nAlexandre\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Jan 2013 13:34:15 -0200",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": true,
"msg_subject": "Nested loop and simple join query - slow after upgrade to 9.2"
},
{
"msg_contents": "On Fri, Jan 25, 2013 at 7:34 AM, alexandre - aldeia digital\n<[email protected]> wrote:\n> Hi,\n>\n> Last weekend, we upgrade a PG from 8.4 to 9.2 version (full\n> pg_dump/restore/vacuum/analyze). After this, some simple join querys\n> became very slow, maybe because the use of nested loops. Bellow, an example\n> with nestedloop on and off:\n\nWhat happens if you bump up default_statistics_target by a factor of\n10 or 100 and redo the analyze?\n\nHere it is finding 39 times more rows than expected:\n\nIndex Scan using ad_pc13t3_modpadrao on pc13t3 t1 (cost=0.00..6.21\nrows=1 width=65) (actual time=0.090..0.252 rows=39 loops=1)\n\nIt would interesting to know why that is.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Jan 2013 10:29:28 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Nested loop and simple join query - slow after upgrade\n to 9.2"
},
{
"msg_contents": "Em 25-01-2013 16:29, Jeff Janes escreveu:\n> On Fri, Jan 25, 2013 at 7:34 AM, alexandre - aldeia digital\n> <[email protected]> wrote:\n>> Hi,\n>>\n>> Last weekend, we upgrade a PG from 8.4 to 9.2 version (full\n>> pg_dump/restore/vacuum/analyze). After this, some simple join querys\n>> became very slow, maybe because the use of nested loops. Bellow, an example\n>> with nestedloop on and off:\n>\n> What happens if you bump up default_statistics_target by a factor of\n> 10 or 100 and redo the analyze?\n\nBefore send the e-mail, the default_statistics_target was 500 and I \nreturn to 100 (default). I will try to set 1000.\n\n> Here it is finding 39 times more rows than expected:\n>\n> Index Scan using ad_pc13t3_modpadrao on pc13t3 t1 (cost=0.00..6.21\n> rows=1 width=65) (actual time=0.090..0.252 rows=39 loops=1)\n>\n> It would interesting to know why that is.\n\nThis is a partial index:\n\n\"ad_pc13t3_modpadrao\" btree (pc13emp08p, pc13anoped, pc13codped, \npc13codigo, pc13cor, pc13empins, pc13emp08, pc13tipin2, pc13insest) \nWHERE pc13emp08p = 0 AND pc13anoped = 0 AND pc13codped = 0 AND pc13item = 0\n\nI can't connect to databse now. I will retry tests in sunday.\n\nBest regards,\n\nAlexandre\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 25 Jan 2013 17:30:41 -0200",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Nested loop and simple join query - slow after upgrade\n to 9.2"
}
] |
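The statistics experiment Jeff proposes, written out against the tables from the thread (the target values are only examples to try):

-- Raise the default for the whole session and re-sample:
SET default_statistics_target = 1000;
ANALYZE VERBOSE pc13t3;
ANALYZE VERBOSE pc13t;

-- Or raise it only for the columns behind the misestimated index condition,
-- which keeps ANALYZE cheaper for the rest of the table:
ALTER TABLE pc13t3 ALTER COLUMN pc13codigo SET STATISTICS 1000;
ALTER TABLE pc13t3 ALTER COLUMN pc13cor SET STATISTICS 1000;
ANALYZE pc13t3;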
[
{
"msg_contents": "Hi All,\n\nI think this is not new topic, I try to look for discussions about same\nsubject I find something like\n\nhttp://postgresql.1045698.n5.nabble.com/libpq-or-postgresql-performance-td5723158.html\nhttp://www.postgresql.org/message-id/[email protected]\n\nI will talk to you about my experiment with same issue\n\n\nI have table Materials created as:\n\npostgre version: 9.1\n\ncreate domain ID as integer;\n^\ncreate domain BIG_ID as bigint;\n^\ncreate domain SMALL_ID as smallint;\n^\ncreate domain GUID as varchar(32);\n^\ncreate domain GUIDSTRING as varchar(38);\n^\ncreate domain CURRENCY as numeric(18,4) default 0;\n^\ncreate domain AMOUNT as numeric(18,4) default 0;\n^\ncreate domain NAME as varchar(250);\n^\ncreate domain CODE as varchar(60);\n^\ncreate domain FOREIGNCODE as varchar(60);\n^\ncreate domain doc as varchar(40);\n^\ncreate domain NOTE as varchar;\n^\ncreate domain MEMO as varchar;\n^\ncreate domain BARCODE as varchar(40);\n^\ncreate domain IMAGE as bytea;\n^\ncreate domain OBJECT as varchar;\n^\ncreate domain CUR_PART_NAME as varchar(40);\n^\ncreate domain STRNAME as varchar(60);\n^\ncreate domain STRCODE as varchar(30);\n^\ncreate domain STRVALUE as varchar(250);\n\n^\ncreate domain SERIAL as varchar(60);\n\n^\ncreate domain D_DATE as DATE;\n\n^\ncreate domain D_TIME as TIME;\n\n^\ncreate domain D_DATETIME as timestamp;\n\n^\ncreate domain D_INTEGER as INTEGER;\n\n^\ncreate domain SHORT_BARCODE as varchar(20);\n\n^\ncreate domain XML as varchar;\n^\ncreate domain D_FLOAT as double precision default 0;\n\n^\ncreate domain D_DOUBLE as double precision default 0;\n\n^\ncreate domain KIND as smallint;\n\n^\ncreate domain D_LEVEL as smallint;\n\n^\ncreate domain D_SIGN as smallint;\n\n^\ncreate domain D_EXCHANGE as double precision;\n\n^\ncreate domain Rarefy as varchar(5);\n^\ncreate domain LONGNAME as varchar(250);\n^\ncreate table \"Materials\"\n(\n \"MatID\" serial,\n \"MatSite\" SMALL_ID,\n \"MatChanged\" BIG_ID,\n \"MatParent\" D_INTEGER default 0 not null ,\n \"MatIsBook\" BOOLEAN default 0 not null ,\n \"MatConsist\" D_INTEGER,\n \"MatClass\" D_INTEGER,\n \"MatName\" Name default not null ,\n \"MatCode\" Code default not null ,\n \"MatForeignCode\" ForeignCode,\n \"MatBarcode\" BARCODE,\n \"MatMaxLimit\" AMOUNT,\n \"MatMinLimit\" AMOUNT,\n \"MatAge\" AMOUNT,\n \"MatPack\" D_INTEGER,\n \"MatLevel\" D_LEVEL default 0 not null ,\n \"MatUnity\" D_INTEGER,\n \"MatDefUnity\" D_INTEGER,\n \"MatPackUnity\" D_INTEGER,\n \"MatPackSize\" Amount,\n \"MatDefWeight\" AMOUNT,\n \"MatApproxWeight\" AMOUNT,\n \"MatDiscount\" AMOUNT,\n \"MatNoDiscount\" BOOLEAN,\n \"MatQntRebate\" AMOUNT,\n \"MatIsQntRebate\" BOOLEAN default 0 not null ,\n \"MatDefGroup\" D_INTEGER,\n \"MatSpecification\" Note,\n \"MatBonus\" AMOUNT,\n \"MatBonusBase\" AMOUNT,\n \"MatRarefy\" Rarefy,\n \"MatDefQnt\" AMOUNT,\n \"MatIsUnbounded\" BOOLEAN,\n \"MatIsActive\" BOOLEAN default 0 not null ,\n \"MatIsSerial\" BOOLEAN default 0 not null ,\n \"MatIsPacked\" BOOLEAN default 0 not null ,\n \"MatIsBatched\" BOOLEAN default 0 not null ,\n \"MatIsWeb\" BOOLEAN,\n \"MatIsAssist\" BOOLEAN default 0 not null ,\n \"MatIsIgnored\" BOOLEAN default 0 not null ,\n \"MatAccVendor\" D_INTEGER,\n \"MatIsConsignment\" BOOLEAN default 0 not null ,\n \"MatIsVariety\" BOOLEAN default 0 not null ,\n \"MatIsMeal\" BOOLEAN default 0 not null ,\n \"MatIsMaintain\" BOOLEAN default 0 not null ,\n \"MatIsCategory\" BOOLEAN default 0 not null ,\n \"MatAccCustomer\" D_INTEGER,\n \"MatDepartment\" ID,\n \"MatChain\" 
D_INTEGER default 0 not null ,\n  \"MatPhotoType\" STRCODE default 0 not null ,\n  \"MatPhoto\" LONGNAME,\n  \"MatCommission\" AMOUNT,\n  \"MatFactor\" AMOUNT,\n  \"MatTax\" AMOUNT,\n  \"MatFees\" AMOUNT,\n  \"MatExpiredThrough\" D_INTEGER,\n  \"MatExpiredBy\" D_INTEGER,\n  \"MatActivateDate\" D_DATE default 'NOW' not null ,\n  \"MatCreatedDate\" D_DATE default 'NOW' not null ,\n  \"MatModel\" D_INTEGER,\n  \"MatNote\" Note,\n  \"MatPoint\" D_INTEGER,\n  \"MatOriginal\" D_INTEGER,\n  \"MatRevision\" BIG_ID default 0 not null\n)\n^\nalter table \"Materials\" add constraint \"pkMaterials\" primary key (\"MatID\")\n^\ncreate index \"IdxMatIsBook\" on \"Materials\" (\"MatIsBook\" )\n^\ncreate index \"IdxMatName\" on \"Materials\" (\"MatName\" )\n^\ncreate unique index \"IdxMatCode\" on \"Materials\" (\"MatCode\" )\n^\ncreate unique index \"IdxMatBarcode\" on \"Materials\" (\"MatBarcode\" )\n^\ncreate index \"IdxMatIsWeb\" on \"Materials\" (\"MatIsWeb\" )\n^\ncreate index \"IdxMatIsAssist\" on \"Materials\" (\"MatIsAssist\" )\n^\ncreate index \"IdxMatIsConsignment\" on \"Materials\" (\"MatIsConsignment\" )\n^\n\nwith 31000 records\n\nI connect to my server through an ADSL connection (4Mbps).\n\nI try this query:\n\nselect \"MatID\", \"MatName\", \"MatCode\"\nfrom \"Materials\"\nwhere \"MatCode\" ~* '^1101'\norder by \"MatCode\"\nlimit 2\n\nWith Wireshark I monitored the TCP packets and found the total data transmitted/received was about 400B.\nIt took about 2.5s to fetch the results, why ??????\n\nAfter trying every solution mentioned in previous messages (DNS, tcpip,\npostgresql.conf, ...) I found no improvement.\n\nThen I tried this one:\n\nusing Zebedee (http://www.winton.org.uk/zebedee/)\nI built an IP tunnel between me and my data server (I used compression\nlevel 9).\n\nSurprisingly, the same query now took about 600 ms, \"very impressive\".\n\nSame thing with this query:\nselect \"MatID\", \"MatName\", \"MatCode\", \"MatParent\" from \"Materials\"\nfrom 48s down to 17s.\n\nAll these tests were done on the same connection with the same devices, so same dns,\ntcp-ip, ....\n\nNow I am sure there is something wrong with libpq.\n",
"msg_date": "Sun, 27 Jan 2013 03:15:45 +0300",
"msg_from": "belal hamed <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL over internet"
},
{
"msg_contents": "On Sun, Jan 27, 2013 at 03:15:45AM +0300, belal hamed wrote:\n> \n> I connect to my server through ADSL connection 4Mbps\n> \n\nHere is your \"problem\". You need to understand the performance\ncharacteristics of your communication channel. ADSL is a VERY\nasymmetric communications channel. Down is usually much faster\nthan up.\n\n> I try this query\n> \n> select \"MatID\", \"MatName\", \"MatCode\"\n> from \"Materials\"\n> where \"MatCode\" ~* '^1101'\n> order by \"MatCode\"\n> limit 2\n> \n> by wireshark I monitor TCP packets I found total data transmit/received 400B\n> I took about 2.5s to fetch results why ??????\n> \n> after trying every solution mentioned in previous messages (DNS, tcpip,\n> postgresql.conf, ...) not found any improve,\n> \n> I tried this one:\n> \n> using Zebedee(http://www.winton.org.uk/zebedee/)\n> I build an IP tunnel between me and my data server (I used compression\n> level 9)\n> \n> surprisingly same query now took about 600 ms, \"very impressive\"\n> \n> same thing with this query\n> select \"MatID\", \"MatName\", \"MatCode\", \"MatParent\" from \"Materials\"\n> from 48s down to 17s\n> \n> all these tests done on same connection with same devices so same dns,\n> tcp-ip, ....\n> \n> now I am sure there is something wrong with libpq.\n\nWhen you wrap the communication channel in an IP tunnel, you are\ncollapsing much of the syn-ack of the libpq protocol. You can see\nthe same effect trying to run any sort of X windows application.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 26 Jan 2013 20:45:05 -0600",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL over internet"
},
{
"msg_contents": "\n\nOn 27/01/13 02:45, [email protected] wrote:\n> On Sun, Jan 27, 2013 at 03:15:45AM +0300, belal hamed wrote:\n>>\n>> I connect to my server through ADSL connection 4Mbps\n>>\n>\n> Here is your \"problem\". You need to understand the performance\n> characteristics of your communication channel. ADSL is a VERY\n> asymmetric communications channel. Down is usually much faster\n> than up.\n\nI'm not convinced that ADSL is your problem.\n\n1. Try just SSH directly to the server, and run psql, and run a query \nlike this one:\n SELECT 'This is a test message' AS status;\n\nThis should run in under 1ms; it also means that we don't have to worry \nabout the details of your database-schema for the purposes of this problem.\n\n2. Try creating a simple SSH tunnel and using your application locally. \nFor example, if your server runs Postgresql on port 5432, run this SSH \ncommand:\n ssh -L 5432:localhost:5432 your_server_hostname\nand then connect to your LOCAL (localhost) port 5432; SSH will handle \nthe port forwarding. [Explanation: \"localhost\" in the SSH command is in \nthe context of your_server_hostname]\nHow does it work now?\n\n3. Try configuration you are currently using, but with the above query.\n\nIt should be possible to distinguish between:\n - slowness caused by the database query itself\n - slowness caused by the network fundamentally.\n - slowness caused by the postgresql/libpq.\n\nHopefully, you'll be able to narrow it down a bit.\n\nHTH,\n\nRichard\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 27 Jan 2013 06:13:04 +0000",
"msg_from": "Richard Neill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL over internet"
},
{
"msg_contents": ">Here is your \"problem\". You need to understand the performance\n>characteristics of your communication channel. ADSL is a VERY\n>asymmetric communications channel. Down is usually much faster\n>than up.\n\nHow it could be ADSL problem when it's the same in tow tests ?\nbeside data transferred when using tunnel is much bigger (about 10KB) than\ndirect connection (400B)\nso it should be slower when using tunnel but the result shows it was faster\n!!!!\n\n>When you wrap the communication channel in an IP tunnel, you are\n>collapsing much of the syn-ack of the libpq protocol. You can see\n>the same effect trying to run any sort of X windows application.\n\nIf that so, why these is not same option in Postgresql, Is it necessary to\nuse IP tunnel to do that and perform fast fetch?\n\n\n>Try creating a simple SSH tunnel\nmy server is windows 7\n\n>It should be possible to distinguish between:\n> - slowness caused by the database query itself\n> - slowness caused by the network fundamentally.\n> - slowness caused by the postgresql/libpq.\n\nI run the same query on same network connection so I eliminate the\nslowness caused by the database query and network fundamentally,\nnothing left but postgresql/libpq\n\nnot anyone consider there may be a bug when connection to a remote server\nover internet in libpq\nthe only different when I used the tunnel is I connect to localhost\ninstead of server IP or domain name (I try both)\n\n>Here is your \"problem\". You need to understand the performance\n>characteristics of your communication channel. ADSL is a VERY\n>asymmetric communications channel. Down is usually much faster\n>than up.How it could be ADSL problem when it's the same in tow tests ?beside data transferred when using tunnel is much bigger (about 10KB) than direct connection (400B)so it should be slower when using tunnel but the result shows it was faster !!!!\n>When you wrap the communication channel in an IP tunnel, you are\n>collapsing much of the syn-ack of the libpq protocol. You can see\n>the same effect trying to run any sort of X windows application.If that so, why these is not same option in Postgresql, Is it necessary to use IP tunnel to do that and perform fast fetch?\n>Try creating a simple SSH tunnelmy server is windows 7>It should be possible to distinguish between:\n> - slowness caused by the database query itself\n> - slowness caused by the network fundamentally.\n> - slowness caused by the postgresql/libpq.I run the same query on same network connection so I eliminate the slowness caused by the database query and network fundamentally,nothing left but postgresql/libpq\nnot anyone consider there may be a bug when connection to a remote server over internet in libpqthe only different when I used the tunnel is I connect to localhost instead of server IP or domain name (I try both)",
"msg_date": "Sun, 27 Jan 2013 15:09:55 +0300",
"msg_from": "belal hamed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL over internet"
},
{
"msg_contents": "On Sun, Jan 27, 2013 at 03:09:55PM +0300, belal hamed wrote:\n> >Here is your \"problem\". You need to understand the performance\n> >characteristics of your communication channel. ADSL is a VERY\n> >asymmetric communications channel. Down is usually much faster\n> >than up.\n> \n> How it could be ADSL problem when it's the same in tow tests ?\n> beside data transferred when using tunnel is much bigger (about 10KB) than\n> direct connection (400B)\n> so it should be slower when using tunnel but the result shows it was faster\n> !!!!\n> \n\nDue to the asymmetric communication, a bigger data output in a single\npacket (the result of using compression on the tunnel) will get sent\nwithout waiting. A smaller packet will delay a bit waiting for some\nadditional data, which in your case does not come. You may want to \ncheck out this document describing some of what I believe is causing\nyour observed behavior:\n\nhttp://www.faqs.org/docs/Linux-HOWTO/ADSL-Bandwidth-Management-HOWTO.html#BACKGROUND\n\n> >When you wrap the communication channel in an IP tunnel, you are\n> >collapsing much of the syn-ack of the libpq protocol. You can see\n> >the same effect trying to run any sort of X windows application.\n> \n> If that so, why these is not same option in Postgresql, Is it necessary to\n> use IP tunnel to do that and perform fast fetch?\n> \n> \n> >Try creating a simple SSH tunnel\n> my server is windows 7\n> \n> >It should be possible to distinguish between:\n> > - slowness caused by the database query itself\n> > - slowness caused by the network fundamentally.\n> > - slowness caused by the postgresql/libpq.\n> \n> I run the same query on same network connection so I eliminate the\n> slowness caused by the database query and network fundamentally,\n> nothing left but postgresql/libpq\n> \n> not anyone consider there may be a bug when connection to a remote server\n> over internet in libpq\n> the only different when I used the tunnel is I connect to localhost\n> instead of server IP or domain name (I try both)\n\nYou would find that if you log in to your DB server and use libpq\nto it over a localhost connection that the performance is good which\npoints to your network as the problem.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 27 Jan 2013 11:33:28 -0600",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL over internet"
},
{
"msg_contents": "\n\n> From: [email protected]\n[mailto:[email protected]] On Behalf Of belal hamed\n> Sent: 27 January 2013 13:16\n> To: [email protected]\n> Subject: [PERFORM] PostgreSQL over internet\n\n\n> by wireshark I monitor TCP packets I found total data transmit/received\n400B\n> I took about 2.5s to fetch results why ??????\n\n\nAre you sure there's not any QOS somewhere that is slowing down the packets\nfor port 5432 or whichever you're using for PostgreSQL?\nPerhaps temporarily changing PostgreSQL's listening port to something else\nmight be a good test.\n\nDavid\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Jan 2013 09:23:24 +1300",
"msg_from": "\"David Rowley\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL over internet"
},
{
"msg_contents": ">Due to the asymmetric communication, a bigger data output in a single\n>packet (the result of using compression on the tunnel) will get sent\n>without waiting. A smaller packet will delay a bit waiting for some\n>additional data, which in your case does not come. You may want to\n>check out this document describing some of what I believe is causing\n>your observed behavior:\n\nSlow\n\n\n\nFast\n\n\nAs I said before I try a small query and big one the result same using IP\nTunnel is fast.\n\n>You would find that if you log in to your DB server and use libpq\n>to it over a localhost connection that the performance is good which\n>points to your network as the problem.\n\nwhen I said I connect to localhost I meant I connect to IP tunnel client\nwitch connect me to the remote PGServer\n\n>Are you sure there's not any QOS somewhere that is slowing down the packets\n>for port 5432 or whichever you're using for PostgreSQL?\n>Perhaps temporarily changing PostgreSQL's listening port to something else\n>might be a good test.\n\nyes I am sure, and if there is any it must affect both test.\n\nBest regards to all.\n\n>Due to the asymmetric communication, a bigger data output in a single\n>packet (the result of using compression on the tunnel) will get sent\n>without waiting. A smaller packet will delay a bit waiting for some\n>additional data, which in your case does not come. You may want to\n>check out this document describing some of what I believe is causing\n>your observed behavior:Slow\nFast\nAs I said before I try a small query and big one the result same using IP Tunnel is fast.>You would find that if you log in to your DB server and use libpq\n>to it over a localhost connection that the performance is good which\n>points to your network as the problem.\nwhen I said I connect to localhost I meant I connect to IP tunnel client witch connect me to the remote PGServer>Are you sure there's not any QOS somewhere that is slowing down the packets\n>for port 5432 or whichever you're using for PostgreSQL?\n>Perhaps temporarily changing PostgreSQL's listening port to something else\n>might be a good test.yes I am sure, and if there is any it must affect both test.Best regards to all.",
"msg_date": "Mon, 28 Jan 2013 14:15:10 +0300",
"msg_from": "belal hamed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL over internet"
}
] |
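One way to separate protocol round trips from query time, along the lines Richard suggests, is to time a trivial statement and the real one over each path being compared (directly, through the tunnel, and locally on the server). This is a psql sketch; "Materials" is the table from the thread:

\timing on
SELECT 1;                                  -- little more than one protocol round trip
SELECT "MatID", "MatName", "MatCode"
  FROM "Materials"
 WHERE "MatCode" ~* '^1101'
 ORDER BY "MatCode"
 LIMIT 2;                                  -- the query under test

If SELECT 1 already takes hundreds of milliseconds over the direct path but not through the tunnel, the difference lies in the transport, not in libpq's handling of the query itself.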
[
{
"msg_contents": "If I drop and then recreate a trigger inside of a single transaction, how\ndoes it affect other processes trying to use the same table? Can they just\nmerrily go along their way using the table, or will they be blocked by an\nexclusive lock?\n\nWe have a trigger that detects illegal drugs and dangerous chemicals (such\nas explosives and flammable compounds that can't be shipped by air). It's\nimplemented as a trigger to ensure that even improperly coded application\nsoftware can't accidentally let a customer order a prohibited compound.\n\nUnfortunately, the trigger's function is necessarily \"heavyweight\" and slow.\n\nThe drop-and-restore-trigger operation is needed when we're copying data\none server to another. Since the data on the primary source have already\nbeen checked, there's no need to let the trigger re-check every row. When\nI drop-and-recreate the trigger for the duration of a COPY operation, it\nspeeds the operation from (for example) 30 minutes to 15 seconds.\n\nBut if the drop-and-restore-trigger operation blocks all access to the\ntables, that's a problem.\n\nThanks,\nCraig\n\nIf I drop and then recreate a trigger inside of a single transaction, how does it affect other processes trying to use the same table? Can they just merrily go along their way using the table, or will they be blocked by an exclusive lock?\nWe have a trigger that detects illegal drugs and dangerous chemicals (such as explosives and flammable compounds that can't be shipped by air). It's implemented as a trigger to ensure that even improperly coded application software can't accidentally let a customer order a prohibited compound.\nUnfortunately, the trigger's function is necessarily \"heavyweight\" and slow.The drop-and-restore-trigger operation is needed when we're copying data one server to another. Since the data on the primary source have already been checked, there's no need to let the trigger re-check every row. When I drop-and-recreate the trigger for the duration of a COPY operation, it speeds the operation from (for example) 30 minutes to 15 seconds.\nBut if the drop-and-restore-trigger operation blocks all access to the tables, that's a problem.Thanks,Craig",
"msg_date": "Mon, 28 Jan 2013 10:54:20 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Triggers and transactions"
},
{
"msg_contents": "On 28/01/13 18:54, Craig James wrote:\n> If I drop and then recreate a trigger inside of a single transaction, \n> how does it affect other processes trying to use the same table? Can \n> they just merrily go along their way using the table, or will they be \n> blocked by an exclusive lock?\n>\nI *think* it blocks, but in any case, read on...\n\n> We have a trigger that detects illegal drugs and dangerous chemicals \n> (such as explosives and flammable compounds that can't be shipped by air).\n\n<pedantry mode=\"full\">detects a reference to illegal... (unless you've \nhooked your RDBMS up to some sort of x-ray scanner, in which case I \nsalute you sir)</pedantry>\n\n> Unfortunately, the trigger's function is necessarily \"heavyweight\" and \n> slow.\n>\n> The drop-and-restore-trigger operation is needed when we're copying \n> data one server to another. \n\nRun the copy as a different user than ordinary applications (a good idea \nanyway). Then the function can just check current_user and exit for the \ncopy.\n\n--\n Richard Huxton\n Archonet Ltd\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 28 Jan 2013 19:10:44 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Triggers and transactions"
},
{
"msg_contents": "On Mon, Jan 28, 2013 at 10:54 AM, Craig James <[email protected]> wrote:\n\n> But if the drop-and-restore-trigger operation blocks all access to the\n> tables, that's a problem.\n>\n\nWere the triggers in question created with \"CREATE CONSTRAINT TRIGGER\"? If\nnot, \"ALTER TABLE foo DISABLE TRIGGER USER\" may do what you need here.\n\nrls\n\n-- \n:wq\n\nOn Mon, Jan 28, 2013 at 10:54 AM, Craig James <[email protected]> wrote:\nBut if the drop-and-restore-trigger operation blocks all access to the tables, that's a problem.\nWere the triggers in question created with \"CREATE CONSTRAINT TRIGGER\"? If not, \"ALTER TABLE foo DISABLE TRIGGER USER\" may do what you need here.\nrls-- :wq",
"msg_date": "Mon, 28 Jan 2013 11:25:18 -0800",
"msg_from": "Rosser Schwarz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Triggers and transactions"
}
] |
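Two sketches of the options raised in this thread, with hypothetical table, trigger, and role names. Note that on the releases current at the time, ALTER TABLE ... DISABLE TRIGGER takes an ACCESS EXCLUSIVE lock, so other sessions touching the table will block until the transaction commits:

BEGIN;
ALTER TABLE orders DISABLE TRIGGER trg_check_prohibited;
COPY orders FROM '/tmp/orders.copy';
ALTER TABLE orders ENABLE TRIGGER trg_check_prohibited;
COMMIT;

Richard's alternative avoids touching the table definition at all, by letting the trigger function return early when a dedicated bulk-load role is doing the copying:

CREATE OR REPLACE FUNCTION check_prohibited() RETURNS trigger AS $$
BEGIN
    IF current_user = 'bulk_loader' THEN   -- hypothetical role used only for COPY
        RETURN NEW;                        -- rows already validated on the source server
    END IF;
    -- ... expensive compound screening here ...
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;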
[
{
"msg_contents": "So where I'm working, a performance issue was identified that affected many\nfunctions, because the (SQL language) functions took an int argument used\nit in a where clause against a column (config_id) that was stored in\nvarchar format, leading to an inefficient casting when the query was\nparameterized. We could work around that with (select $3::text) instead of\njust $3, but since the data is actually all numbers under 65k, we altered\nthe data type of the column to smallint, rather than editing a boatload of\nfunctions with a hacky workaround.\n\nFor most functions, this fixed the problem.\n\nHowever, it had a drastically-negative impact on the query in question,\nwhich was originally taking 2 minutes, 45 seconds. After adding a couple\nindexes with the config_id still as a varchar, that time is reduced down to\n42 seconds. However when the data type is smallint, the query runs for\nmany hours - I let it run for 4.5 hours yesterday before cancelling it.\n\nIt's pretty clear that the planner is making horrid misestimates and\npicking a terrible plan. I would appreciate any advice for getting this\ninto a better state.\n\nHere are the explain plans:\n\nWhen config_id is a varchar, it executes in 42 seconds:\nhttp://explain.depesz.com/s/wuf\n\nWhen config_id is a smallint, it runs too long to allow to complete, but\nclearly the plan is bad:\nhttp://explain.depesz.com/s/u5P\n\nHere is the query, along with rowcounts and schema of every table involved\nin the query:\nhttp://pgsql.privatepaste.com/c66fd497c9\n\nPostgreSQL version is 8.4, and most of our GUC's are default.\n\nThanks in advance for any suggestions.\n-- \nCasey Allen Shobe\[email protected]\n\nSo where I'm working, a performance issue was identified that affected many functions, because the (SQL language) functions took an int argument used it in a where clause against a column (config_id) that was stored in varchar format, leading to an inefficient casting when the query was parameterized. We could work around that with (select $3::text) instead of just $3, but since the data is actually all numbers under 65k, we altered the data type of the column to smallint, rather than editing a boatload of functions with a hacky workaround.\nFor most functions, this fixed the problem.However, it had a drastically-negative impact on the query in question, which was originally taking 2 minutes, 45 seconds. After adding a couple indexes with the config_id still as a varchar, that time is reduced down to 42 seconds. However when the data type is smallint, the query runs for many hours - I let it run for 4.5 hours yesterday before cancelling it.\nIt's pretty clear that the planner is making horrid misestimates and picking a terrible plan. I would appreciate any advice for getting this into a better state.\nHere are the explain plans:When config_id is a varchar, it executes in 42 seconds:http://explain.depesz.com/s/wuf\nWhen config_id is a smallint, it runs too long to allow to complete, but clearly the plan is bad:http://explain.depesz.com/s/u5P\nHere is the query, along with rowcounts and schema of every table involved in the query:http://pgsql.privatepaste.com/c66fd497c9\nPostgreSQL version is 8.4, and most of our GUC's are default.Thanks in advance for any suggestions.-- Casey Allen [email protected]",
"msg_date": "Fri, 1 Feb 2013 12:11:53 -0500",
"msg_from": "Casey Allen Shobe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fighting the planner >:-("
},
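The paste links above are external, so here is only a rough sketch of the two alternatives the post describes, with an assumed function name and a deliberately simplified query; the (select $1::text) form forces a single cast of the parameter instead of a per-row cast of the varchar column, while the ALTER TABLE changes the column type outright:

  -- workaround form inside a SQL-language function (column stays varchar)
  CREATE OR REPLACE FUNCTION get_config_rows(integer)
  RETURNS SETOF hewitt_1_0_factors_precalc_new LANGUAGE sql STABLE AS $$
      SELECT h.*
      FROM hewitt_1_0_factors_precalc_new h
      WHERE h.config_id = (SELECT $1::text);
  $$;

  -- route actually taken in the thread: change the column type (rewrites the table)
  ALTER TABLE hewitt_1_0_factors_precalc_new
      ALTER COLUMN config_id TYPE smallint USING config_id::smallint;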
{
"msg_contents": "My apologies - I included the wrong version of the query before...during\n testing I had tried deparameterizing a few of the input parameters. I\nalso accidentally left out the schema for the network_config_tot2 table\nfrom the initial paste.\n\nHere is an updated paste, which shows the correct query in a prepare\nstatements. The explain plans are from explain execute hewitt_test (...):\nhttp://pgsql.privatepaste.com/00c582c840\n\nHere is the correct explain plan for this statement (still bad):\nhttp://explain.depesz.com/s/c46\n\n\n\nOn Fri, Feb 1, 2013 at 12:11 PM, Casey Allen Shobe <[email protected]> wrote:\n\n> So where I'm working, a performance issue was identified that affected\n> many functions, because the (SQL language) functions took an int argument\n> used it in a where clause against a column (config_id) that was stored in\n> varchar format, leading to an inefficient casting when the query was\n> parameterized. We could work around that with (select $3::text) instead of\n> just $3, but since the data is actually all numbers under 65k, we altered\n> the data type of the column to smallint, rather than editing a boatload of\n> functions with a hacky workaround.\n>\n> For most functions, this fixed the problem.\n>\n> However, it had a drastically-negative impact on the query in question,\n> which was originally taking 2 minutes, 45 seconds. After adding a couple\n> indexes with the config_id still as a varchar, that time is reduced down to\n> 42 seconds. However when the data type is smallint, the query runs for\n> many hours - I let it run for 4.5 hours yesterday before cancelling it.\n>\n> It's pretty clear that the planner is making horrid misestimates and\n> picking a terrible plan. I would appreciate any advice for getting this\n> into a better state.\n>\n> Here are the explain plans:\n>\n> When config_id is a varchar, it executes in 42 seconds:\n> http://explain.depesz.com/s/wuf\n>\n> When config_id is a smallint, it runs too long to allow to complete, but\n> clearly the plan is bad:\n> http://explain.depesz.com/s/u5P\n>\n> Here is the query, along with rowcounts and schema of every table involved\n> in the query:\n> http://pgsql.privatepaste.com/c66fd497c9\n>\n> PostgreSQL version is 8.4, and most of our GUC's are default.\n>\n> Thanks in advance for any suggestions.\n> --\n> Casey Allen Shobe\n> [email protected]\n>\n>\n>\n\n\n-- \nCasey Allen Shobe\[email protected]\n\nMy apologies - I included the wrong version of the query before...during testing I had tried deparameterizing a few of the input parameters. I also accidentally left out the schema for the network_config_tot2 table from the initial paste.\nHere is an updated paste, which shows the correct query in a prepare statements. The explain plans are from explain execute hewitt_test (...):http://pgsql.privatepaste.com/00c582c840\nHere is the correct explain plan for this statement (still bad):http://explain.depesz.com/s/c46\nOn Fri, Feb 1, 2013 at 12:11 PM, Casey Allen Shobe <[email protected]> wrote:\nSo where I'm working, a performance issue was identified that affected many functions, because the (SQL language) functions took an int argument used it in a where clause against a column (config_id) that was stored in varchar format, leading to an inefficient casting when the query was parameterized. 
We could work around that with (select $3::text) instead of just $3, but since the data is actually all numbers under 65k, we altered the data type of the column to smallint, rather than editing a boatload of functions with a hacky workaround.\nFor most functions, this fixed the problem.However, it had a drastically-negative impact on the query in question, which was originally taking 2 minutes, 45 seconds. After adding a couple indexes with the config_id still as a varchar, that time is reduced down to 42 seconds. However when the data type is smallint, the query runs for many hours - I let it run for 4.5 hours yesterday before cancelling it.\nIt's pretty clear that the planner is making horrid misestimates and picking a terrible plan. I would appreciate any advice for getting this into a better state.\nHere are the explain plans:When config_id is a varchar, it executes in 42 seconds:http://explain.depesz.com/s/wuf\nWhen config_id is a smallint, it runs too long to allow to complete, but clearly the plan is bad:http://explain.depesz.com/s/u5P\n\nHere is the query, along with rowcounts and schema of every table involved in the query:http://pgsql.privatepaste.com/c66fd497c9\nPostgreSQL version is 8.4, and most of our GUC's are default.Thanks in advance for any suggestions.-- Casey Allen Shobe\[email protected]\n\n\n-- Casey Allen [email protected]",
"msg_date": "Fri, 1 Feb 2013 12:54:09 -0500",
"msg_from": "Casey Allen Shobe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fighting the planner >:-("
},
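For readers unfamiliar with the test method mentioned above ("explain execute hewitt_test (...)"), this is roughly how a parameterized plan is inspected from psql; the statement below is a simplified stand-in, not the real hewitt_test query:

  PREPARE cfg_test (smallint) AS
      SELECT count(*)
      FROM hewitt_1_0_factors_precalc_new
      WHERE config_id = $1;
  EXPLAIN EXECUTE cfg_test (123::smallint);
  DEALLOCATE cfg_test;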
{
"msg_contents": "On 01/02/13 17:54, Casey Allen Shobe wrote:\n> My apologies - I included the wrong version of the query \n> before...during testing I had tried deparameterizing a few of the \n> input parameters. I also accidentally left out the schema for the \n> network_config_tot2 table from the initial paste.\n>\n> Here is an updated paste, which shows the correct query in a prepare \n> statements. The explain plans are from explain execute hewitt_test (...):\n> http://pgsql.privatepaste.com/00c582c840\n>\n> Here is the correct explain plan for this statement (still bad):\n> http://explain.depesz.com/s/c46\n\nThree quick observations before the weekend.\n\n1. You said config_id was now \"smallint\" in your email, but it reads \n\"int\" in the pastes above.\n Doesn't matter much which, but just checking we've got the right pastes.\n\n2. The total estimated cost of both queries is about the same \n(477,225.19 for the varchar, 447,623.86 for the int).\n This suggests something about your configuration doesn't match the \nperformance of your machine, since presumably the int version is taking \nat least twice as long as the varchar one.\n\n3. Interestingly, the config_id search on both plans seems to be using a \nBitmap Index, so I'm not sure that's the root cause. However, the \nvarchar version seems to have a literal string it's matching against. If \nyou've manually substituted in a literal value, that could be skewing \nthe tests.\n\nAnd two things for you to try if you would:\n\n1. Can you just check and see if any of the row estimates are horribly \noff for any particular clause in the query?\n\n2. You mention your config settings are mostly at default. What's your \nwork_mem and can you increase it? You can issue a SET for the current \nsession, no need to change it globally. If you've got the RAM try \ndoubling it, then double it again. See what happens to your plan then.\n\n--\n Richard Huxton\n Archonet Ltd\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 01 Feb 2013 18:50:13 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fighting the planner >:-("
},
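A sketch of the session-level experiment suggested in the last message; the value is illustrative only and nothing is changed in postgresql.conf:

  SHOW work_mem;           -- current value for this session
  SET work_mem = '64MB';   -- session-local override
  -- re-run EXPLAIN on the prepared statement here and compare the plan
  RESET work_mem;          -- back to the configured default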
{
"msg_contents": "On Fri, Feb 1, 2013 at 1:50 PM, Richard Huxton <[email protected]> wrote:\n\n> 1. You said config_id was now \"smallint\" in your email, but it reads \"int\"\n> in the pastes above.\n> Doesn't matter much which, but just checking we've got the right pastes.\n>\n\nYou have the correct pastes. I did an alter to int in an attempt to see if\nthat made any difference. It didn't. It takes a couple hours to do that\nalter so I didn't convert it back to smallint.\n\n\n> 2. The total estimated cost of both queries is about the same (477,225.19\n> for the varchar, 447,623.86 for the int).\n> This suggests something about your configuration doesn't match the\n> performance of your machine, since presumably the int version is taking at\n> least twice as long as the varchar one.\n>\n\nConfiguration is pretty standard. As for the machine, it's a VM in an ESXi\nsetup, with dedicated resources. The disk is very fast and is barely\ntouched. One CPU sits at 100% for as long as I let the query run. There\nis 18GB RAM and PostgreSQL is the only service running on the machine.\n\n\n> 3. Interestingly, the config_id search on both plans seems to be using a\n> Bitmap Index, so I'm not sure that's the root cause. However, the varchar\n> version seems to have a literal string it's matching against. If you've\n> manually substituted in a literal value, that could be skewing the tests.\n>\n\nThat's why I sent the followup re-parameterizing everything. And the\nexplains are on prepared statements with the parameterization. If I just\nput the parameter values directly into the query and run it straight, it's\nfast.\n\n1. Can you just check and see if any of the row estimates are horribly off\n> for any particular clause in the query?\n>\n\nYes they are. The places where the estimate is rows=1, particularly.\n\n\n> 2. You mention your config settings are mostly at default. What's your\n> work_mem and can you increase it? You can issue a SET for the current\n> session, no need to change it globally. If you've got the RAM try doubling\n> it, then double it again. See what happens to your plan then.\n>\n\n21861KB. I tried setting it to 192MB and re-preparing the same statement.\n Here's the explain execute: http://explain.depesz.com/s/pZ0, which looks\nidentical as before.\n\n-- \nCasey Allen Shobe\[email protected]\n\nOn Fri, Feb 1, 2013 at 1:50 PM, Richard Huxton <[email protected]> wrote:\n1. You said config_id was now \"smallint\" in your email, but it reads \"int\" in the pastes above.\n\n Doesn't matter much which, but just checking we've got the right pastes.You have the correct pastes. I did an alter to int in an attempt to see if that made any difference. It didn't. It takes a couple hours to do that alter so I didn't convert it back to smallint.\n 2. The total estimated cost of both queries is about the same (477,225.19 for the varchar, 447,623.86 for the int).\n\n This suggests something about your configuration doesn't match the performance of your machine, since presumably the int version is taking at least twice as long as the varchar one.\nConfiguration is pretty standard. As for the machine, it's a VM in an ESXi setup, with dedicated resources. The disk is very fast and is barely touched. One CPU sits at 100% for as long as I let the query run. There is 18GB RAM and PostgreSQL is the only service running on the machine.\n 3. Interestingly, the config_id search on both plans seems to be using a Bitmap Index, so I'm not sure that's the root cause. 
However, the varchar version seems to have a literal string it's matching against. If you've manually substituted in a literal value, that could be skewing the tests.\nThat's why I sent the followup re-parameterizing everything. And the explains are on prepared statements with the parameterization. If I just put the parameter values directly into the query and run it straight, it's fast.\n1. Can you just check and see if any of the row estimates are horribly off for any particular clause in the query?\nYes they are. The places where the estimate is rows=1, particularly. \n2. You mention your config settings are mostly at default. What's your work_mem and can you increase it? You can issue a SET for the current session, no need to change it globally. If you've got the RAM try doubling it, then double it again. See what happens to your plan then.\n21861KB. I tried setting it to 192MB and re-preparing the same statement. Here's the explain execute: http://explain.depesz.com/s/pZ0, which looks identical as before.\n-- Casey Allen [email protected]",
"msg_date": "Fri, 1 Feb 2013 14:18:47 -0500",
"msg_from": "Casey Allen Shobe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fighting the planner >:-("
},
{
"msg_contents": "Rhodiumtoad on IRC helped me figure out how to move part of the query into\na CTE in order to work around the planner problem. This is a hack but it\nbrings the time down from many hours to 17 seconds, which is better than it\nwas even with the better plan in the first place! For some reason it\nactually gets 2 seconds faster yet by putting it in a SQL function rather\nthan using prepare/execute.\n\nHopefully some improvements to the planner can come from this information?\n\nHere is the CTE version of the query:\nhttp://pgsql.privatepaste.com/2f7fd3f669\n...and here is it's explain analyze: http://explain.depesz.com/s/5ml\n\n-- \nCasey Allen Shobe\[email protected]\n\nRhodiumtoad on IRC helped me figure out how to move part of the query into a CTE in order to work around the planner problem. This is a hack but it brings the time down from many hours to 17 seconds, which is better than it was even with the better plan in the first place! For some reason it actually gets 2 seconds faster yet by putting it in a SQL function rather than using prepare/execute.\nHopefully some improvements to the planner can come from this information?Here is the CTE version of the query: http://pgsql.privatepaste.com/2f7fd3f669\n...and here is it's explain analyze: http://explain.depesz.com/s/5ml-- Casey Allen [email protected]",
"msg_date": "Fri, 1 Feb 2013 16:02:26 -0500",
"msg_from": "Casey Allen Shobe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fighting the planner >:-("
},
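The CTE rewrite itself is only available behind the paste link, so the following is just a schematic illustration of the technique being described: in the PostgreSQL versions discussed here a WITH clause acts as an optimization fence, so materializing the selective part of the query first keeps the planner from folding it into the badly estimated join. Table, column, and join names below are placeholders rather than the real query:

  WITH factors AS (
      SELECT *
      FROM hewitt_1_0_factors_precalc_new
      WHERE config_id = 123
        AND data_type = 'Historical'
  )
  SELECT c.*, f.*
  FROM census_user c
  JOIN factors f ON f.parent_id = c.parent_id;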
{
"msg_contents": "2013/2/1 Casey Allen Shobe <[email protected]>:\n> Hopefully some improvements to the planner can come from this information?\n>\n> Here is the CTE version of the query:\n> http://pgsql.privatepaste.com/2f7fd3f669\n> ...and here is it's explain analyze: http://explain.depesz.com/s/5ml\n\nEstimated rows for ‘hewitt_1_0_factors_precalc_new’ are 1000x less then actual.\nAnd for ‘census_user’ estimation is 100x less, then actual.\n\nHow many rows are in those tables and what is your statistics target?\n\n\n-- \nVictor Y. Yegorov\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 1 Feb 2013 23:12:43 +0200",
"msg_from": "=?UTF-8?B?0JLQuNC60YLQvtGAINCV0LPQvtGA0L7Qsg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fighting the planner >:-("
},
{
"msg_contents": "On Fri, Feb 1, 2013 at 4:12 PM, Виктор Егоров <[email protected]> wrote:\n\n> Estimated rows for ‘hewitt_1_0_factors_precalc_new’ are 1000x less then\n> actual.\n> And for ‘census_user’ estimation is 100x less, then actual.\n>\n> How many rows are in those tables and what is your statistics target?\n>\n\nRowcounts are shown in the earlier paste link, but apparently I forgot to\ninclude the census table - hewitt_1_0_factors_precalc_new has 4,135,890\nrows, and census_user has 1846439 rows.\n\nStatistics target is the default at 100.\n\n-- \nCasey Allen Shobe\[email protected]\n\nOn Fri, Feb 1, 2013 at 4:12 PM, Виктор Егоров <[email protected]> wrote:\nEstimated rows for ‘hewitt_1_0_factors_precalc_new’ are 1000x less then actual.\n\nAnd for ‘census_user’ estimation is 100x less, then actual.\n\nHow many rows are in those tables and what is your statistics target?Rowcounts are shown in the earlier paste link, but apparently I forgot to include the census table - hewitt_1_0_factors_precalc_new has 4,135,890 rows, and census_user has 1846439 rows. \nStatistics target is the default at 100.-- Casey Allen [email protected]",
"msg_date": "Fri, 1 Feb 2013 16:18:34 -0500",
"msg_from": "Casey Allen Shobe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fighting the planner >:-("
},
{
"msg_contents": "2013/2/1 Casey Allen Shobe <[email protected]>:\n> Rowcounts are shown in the earlier paste link, but apparently I forgot to\n> include the census table - hewitt_1_0_factors_precalc_new has 4,135,890\n> rows, and census_user has 1846439 rows.\n>\n> Statistics target is the default at 100.\n\nI would try the following:\nALTER TABLE hewitt_1_0_factors_precalc_new SET STATISTICS 1000;\nALTER TABLE census_user SET STATISTICS 500;\nALTER TABLE census_output SET STATISTICS 500;\nand analyzed them after. I hope I guessed ‘census_output’ name correctly.\n\nAnd could you kindly share the plan after:\nSET enable_nestloop TO off;\n\n\n-- \nVictor Y. Yegorov\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 1 Feb 2013 23:55:21 +0200",
"msg_from": "=?UTF-8?B?0JLQuNC60YLQvtGAINCV0LPQvtGA0L7Qsg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fighting the planner >:-("
},
{
"msg_contents": "I'll get back to you on this Monday - I'm heading home for the week now.\n\nHowever I was unable to adjust the statistics target using that command:\n\nalter table opportunity.census_user set statistics 500;\nERROR: syntax error at or near \"statistics\"\nLINE 1: alter table opportunity.census_user set statistics 500;\n ^\n\n\n\nOn Fri, Feb 1, 2013 at 4:55 PM, Виктор Егоров <[email protected]> wrote:\n\n> 2013/2/1 Casey Allen Shobe <[email protected]>:\n> > Rowcounts are shown in the earlier paste link, but apparently I forgot to\n> > include the census table - hewitt_1_0_factors_precalc_new has 4,135,890\n> > rows, and census_user has 1846439 rows.\n> >\n> > Statistics target is the default at 100.\n>\n> I would try the following:\n> ALTER TABLE hewitt_1_0_factors_precalc_new SET STATISTICS 1000;\n> ALTER TABLE census_user SET STATISTICS 500;\n> ALTER TABLE census_output SET STATISTICS 500;\n> and analyzed them after. I hope I guessed ‘census_output’ name correctly.\n>\n> And could you kindly share the plan after:\n> SET enable_nestloop TO off;\n>\n>\n> --\n> Victor Y. Yegorov\n>\n\n\n\n-- \nCasey Allen Shobe\[email protected]\n\nI'll get back to you on this Monday - I'm heading home for the week now.However I was unable to adjust the statistics target using that command:\nalter table opportunity.census_user set statistics 500;ERROR: syntax error at or near \"statistics\"LINE 1: alter table opportunity.census_user set statistics 500; ^\nOn Fri, Feb 1, 2013 at 4:55 PM, Виктор Егоров <[email protected]> wrote:\n2013/2/1 Casey Allen Shobe <[email protected]>:\n> Rowcounts are shown in the earlier paste link, but apparently I forgot to\n> include the census table - hewitt_1_0_factors_precalc_new has 4,135,890\n> rows, and census_user has 1846439 rows.\n>\n> Statistics target is the default at 100.\n\nI would try the following:\nALTER TABLE hewitt_1_0_factors_precalc_new SET STATISTICS 1000;\nALTER TABLE census_user SET STATISTICS 500;\nALTER TABLE census_output SET STATISTICS 500;\nand analyzed them after. I hope I guessed ‘census_output’ name correctly.\n\nAnd could you kindly share the plan after:\nSET enable_nestloop TO off;\n\n\n--\nVictor Y. Yegorov\n-- Casey Allen [email protected]",
"msg_date": "Fri, 1 Feb 2013 17:03:34 -0500",
"msg_from": "Casey Allen Shobe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fighting the planner >:-("
},
{
"msg_contents": "2013/2/2 Casey Allen Shobe <[email protected]>:\n> However I was unable to adjust the statistics target using that command:\n>\n> alter table opportunity.census_user set statistics 500;\n> ERROR: syntax error at or near \"statistics\"\n> LINE 1: alter table opportunity.census_user set statistics 500;\n\nI'm sorry for this, my bad. Try the following:\n\nALTER TABLE census_user ALTER parent_id SET STATISTICS 500;\nALTER TABLE census_user ALTER stakeholder_code SET STATISTICS 500;\n\nDo the same for all the columns in ‘hewitt_1_0_factors_precalc_new_pkey’ index,\nsetting target at 1000. I would also updated target for columns from\nthis filter:\nFilter: (((h.discount_type)::text = ANY ('{\"Avg Comp\",Blue}'::text[]))\nAND ((h.data_type)::text = 'Historical'::text) AND ((h.source)::text =\n'Hewitt 1.0'::text)\nAND ((h.service_catg_scheme)::text = '11+3'::text))\n\n\n-- \nVictor Y. Yegorov\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 2 Feb 2013 00:17:54 +0200",
"msg_from": "=?UTF-8?B?0JLQuNC60YLQvtGAINCV0LPQvtGA0L7Qsg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fighting the planner >:-("
}
] |
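A short sketch pulling together the corrected per-column syntax from the last message with the steps it implies: the new statistics target only takes effect after ANALYZE, and pg_stats can be used to confirm what was sampled. The targets and column choices follow the suggestions above; the pg_stats check and the enable_nestloop line are added illustrations of what was being asked for:

  ALTER TABLE census_user ALTER COLUMN parent_id SET STATISTICS 500;
  ALTER TABLE census_user ALTER COLUMN stakeholder_code SET STATISTICS 500;
  ANALYZE census_user;

  SELECT attname, n_distinct, null_frac
  FROM pg_stats
  WHERE schemaname = 'opportunity'
    AND tablename = 'census_user';

  SET enable_nestloop TO off;   -- session-only experiment requested in the last message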
[
{
"msg_contents": "Hi,\nI have read a lot of different information about the benefits of using numerical primary key vs alphanumerical primary key(small size). And what I am gathering is that for performance there is no more great advantage.\nIt seems like now RDBMS in general, postgres in particular handles pretty well joins on text indexes.\nDid I understand correctly?\nThanks,\nAnne\n\n\n\n\n\n\n\n\n\nHi,\nI have read a lot of different information about the benefits of using numerical primary key vs alphanumerical primary key(small size). And what I am gathering is that for performance there is no more great advantage.\nIt seems like now RDBMS in general, postgres in particular handles pretty well joins on text indexes.\nDid I understand correctly?\nThanks,\nAnne",
"msg_date": "Mon, 4 Feb 2013 21:52:39 +0000",
"msg_from": "Anne Rosset <[email protected]>",
"msg_from_op": true,
"msg_subject": "numerical primary key vs alphanumerical primary key"
},
{
"msg_contents": "2013/2/4 Anne Rosset <[email protected]>:\n> I have read a lot of different information about the benefits of using\n> numerical primary key vs alphanumerical primary key(small size). And what I\n> am gathering is that for performance there is no more great advantage.\n>\n> It seems like now RDBMS in general, postgres in particular handles pretty\n> well joins on text indexes.\n\nPlease, take a look at this blog post:\nhttp://www.depesz.com/2012/06/07/123-vs-depesz-what-is-faster/\n\n\n-- \nVictor Y. Yegorov\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 4 Feb 2013 23:57:56 +0200",
"msg_from": "=?UTF-8?B?0JLQuNC60YLQvtGAINCV0LPQvtGA0L7Qsg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: numerical primary key vs alphanumerical primary key"
},
{
"msg_contents": "The biggest difference in performance between text and integer keys is \nusually down to whether you're inserting in order or not. Inserting in \norder is tons faster regardless of the type, since it keeps the index \nunfragmented and doesn't cause page splits.\n\nOn 02/04/2013 22:52, Anne Rosset wrote:\n>\n> Hi,\n>\n> I have read a lot of different information about the benefits of using \n> numerical primary key vs alphanumerical primary key(small size). And \n> what I am gathering is that for performance there is no more great \n> advantage.\n>\n> It seems like now RDBMS in general, postgres in particular handles \n> pretty well joins on text indexes.\n>\n> Did I understand correctly?\n>\n> Thanks,\n>\n> Anne\n>\n\n\n\n\n\n\n\nThe biggest difference in performance\n between text and integer keys is usually down to whether you're\n inserting in order or not. Inserting in order is tons faster\n regardless of the type, since it keeps the index unfragmented and\n doesn't cause page splits.\n\n On 02/04/2013 22:52, Anne Rosset wrote:\n\n\n\n\n\n\nHi,\nI have read a lot of different information\n about the benefits of using numerical primary key vs\n alphanumerical primary key(small size). And what I am\n gathering is that for performance there is no more great\n advantage.\nIt seems like now RDBMS in general,\n postgres in particular handles pretty well joins on text\n indexes.\nDid I understand correctly?\nThanks,\nAnne",
"msg_date": "Tue, 12 Feb 2013 14:06:57 +0100",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: numerical primary key vs alphanumerical primary key"
},
{
"msg_contents": "My experience has been that the performance advantage for numeric keys is primarily an Oracle thing. However, Oracle is popular enough for people to assume that it applies to databases in general.\n\nJulien Cigar <[email protected]> wrote:\n\n>The biggest difference in performance between text and integer keys is \n>usually down to whether you're inserting in order or not. Inserting in \n>order is tons faster regardless of the type, since it keeps the index \n>unfragmented and doesn't cause page splits.\n>\n>On 02/04/2013 22:52, Anne Rosset wrote:\n>>\n>> Hi,\n>>\n>> I have read a lot of different information about the benefits of\n>using \n>> numerical primary key vs alphanumerical primary key(small size). And \n>> what I am gathering is that for performance there is no more great \n>> advantage.\n>>\n>> It seems like now RDBMS in general, postgres in particular handles \n>> pretty well joins on text indexes.\n>>\n>> Did I understand correctly?\n>>\n>> Thanks,\n>>\n>> Anne\n>>\n\n-- \nSent from my Android phone with K-9 Mail. Please excuse my brevity.\nMy experience has been that the performance advantage for numeric keys is primarily an Oracle thing. However, Oracle is popular enough for people to assume that it applies to databases in general.Julien Cigar <[email protected]> wrote:\nThe biggest difference in performance\n between text and integer keys is usually down to whether you're\n inserting in order or not. Inserting in order is tons faster\n regardless of the type, since it keeps the index unfragmented and\n doesn't cause page splits.\n\n On 02/04/2013 22:52, Anne Rosset wrote:\n\n\n\n\nHi,\nI have read a lot of different information\n about the benefits of using numerical primary key vs\n alphanumerical primary key(small size). And what I am\n gathering is that for performance there is no more great\n advantage.\nIt seems like now RDBMS in general,\n postgres in particular handles pretty well joins on text\n indexes.\nDid I understand correctly?\nThanks,\nAnne\n\n\n\n\n-- \nSent from my Android phone with K-9 Mail. Please excuse my brevity.",
"msg_date": "Tue, 12 Feb 2013 08:05:29 -0700",
"msg_from": "Grant Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: numerical primary key vs alphanumerical primary key"
},
{
"msg_contents": "For SQL Server, having a clustered index on a numeric incrementing key\nis much better than having a semi-random uuid primary key used as the\nclustered index itself.\n\nFlorent\n\nOn Tue, Feb 12, 2013 at 4:05 PM, Grant Johnson <[email protected]> wrote:\n> My experience has been that the performance advantage for numeric keys is\n> primarily an Oracle thing. However, Oracle is popular enough for people to\n> assume that it applies to databases in general.\n>\n>\n> Julien Cigar <[email protected]> wrote:\n>>\n>> The biggest difference in performance between text and integer keys is\n>> usually down to whether you're inserting in order or not. Inserting in order\n>> is tons faster regardless of the type, since it keeps the index unfragmented\n>> and doesn't cause page splits.\n>>\n>> On 02/04/2013 22:52, Anne Rosset wrote:\n>>\n>> Hi,\n>>\n>> I have read a lot of different information about the benefits of using\n>> numerical primary key vs alphanumerical primary key(small size). And what I\n>> am gathering is that for performance there is no more great advantage.\n>>\n>> It seems like now RDBMS in general, postgres in particular handles pretty\n>> well joins on text indexes.\n>>\n>> Did I understand correctly?\n>>\n>> Thanks,\n>>\n>> Anne\n>>\n>>\n>\n> --\n> Sent from my Android phone with K-9 Mail. Please excuse my brevity.\n\n\n\n--\nFlorent Guillaume, Director of R&D, Nuxeo\nOpen Source, Java EE based, Enterprise Content Management (ECM)\nhttp://www.nuxeo.com http://www.nuxeo.org +33 1 40 33 79 87\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 12 Feb 2013 16:14:15 +0100",
"msg_from": "Florent Guillaume <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: numerical primary key vs alphanumerical primary key"
},
{
"msg_contents": "On Tue, Feb 12, 2013 at 12:05 PM, Grant Johnson <[email protected]> wrote:\n> My experience has been that the performance advantage for numeric keys is\n> primarily an Oracle thing. However, Oracle is popular enough for people to\n> assume that it applies to databases in general.\n\nThe advantage in PG also exists, only tied to size. It's not really\nwhether it's numeric or not, but whether values are big or not. Int or\nother primitive types tend to be far faster to join because of their\nfixed, small size. If you have a varchar, and if you have big values\nfrom time to time, joining becomes heavy because the index is huge (it\nhas to contain the values).\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 12 Feb 2013 13:26:11 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: numerical primary key vs alphanumerical primary key"
}
] |
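The size argument in the last message is easy to verify directly; this is a small, self-contained illustration (throwaway temp tables and made-up key values) comparing the index footprint of an integer key against a wider text key:

  CREATE TEMP TABLE k_int  (id integer PRIMARY KEY);
  CREATE TEMP TABLE k_text (id text    PRIMARY KEY);

  INSERT INTO k_int  SELECT g FROM generate_series(1, 100000) g;
  INSERT INTO k_text SELECT 'customer-record-' || g FROM generate_series(1, 100000) g;

  SELECT pg_size_pretty(pg_relation_size('k_int_pkey'))  AS int_pkey_size,
         pg_size_pretty(pg_relation_size('k_text_pkey')) AS text_pkey_size;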
[
{
"msg_contents": "We upgraded from PG 9.1 to 9.2. Since the upgrade, the # of active queries has raised significantly, especially during our peak time where lots of users are logging in. According to New Relic, this query is now taking up the most amount of time during peak activity and my pg_stat_activity and slow log sampling agrees. We have 3 DB servers referenced here, production running 9.2.2, semi-idle (idle except for replication when I ran the test) running 9.2.2, and 9.1.3 completely idle with an old dump restored. \n\nHW: All servers have the same amount of RAM and CPU and all our indexes fit in RAM. HD's differ, but they're all SSD and since the explain analyze's are all indexes, I'm going to assume it doesn't mean that much in the context. \n\nConfiguration across all three servers is the same. \n\nTable was auto vacuumed today. \n\nIdle DB running 9.1.3 \n\nhttp://explain.depesz.com/s/7DZ \n\nSemi-Idle DB running 9.2.2 (transactions ran previously and replication happening, but no active traffic when query was ran) \n\nhttp://explain.depesz.com/s/fEE \n\nProduction DB running 9.2.2 (Incredibly Active) \n\nhttp://explain.depesz.com/s/qVW \n\n Table \"public.users\" \n Column | Type | Modifiers \n-------------------+--------------------------+----------------------------------------------------\n id | integer | not null default nextval('users_id_seq'::regclass)\n created_dt | timestamp with time zone | default now()\n last_login_dt | timestamp with time zone | default now()\n email | character varying | not null\n verified | boolean | default false\n first_name | character varying | \n last_name | character varying | \n username | character varying(50) | not null\n location | character varying | \n im_type | character varying | \n im_username | character varying | \n website | character varying | \n phone_mobile | character varying(30) | \n postal_code | character varying(10) | \n language_tag | character varying(7) | default 'en'::character varying\n settings | text | default ''::text\n country | character(2) | default ''::bpchar\n timezone | character varying(50) | not null default 'UTC'::character varying\n verify_hash | character varying | \n bio | character varying(160) | \n twitter | character varying | \n facebook | character varying | \n search_updated_dt | timestamp with time zone | not null default now()\n source | character varying | \n auto_verified | boolean | \n password | character(60) | \n google | character varying | \n gender | smallint | not null default 0\n birthdate | date | \n weibo | character varying | \nIndexes:\n \"users_pkey\" PRIMARY KEY, btree (id)\n \"u_auth_email_index\" UNIQUE, btree (lower(email::text))\n \"u_auth_uname_index\" UNIQUE, btree (lower(username::text))\n \"u_auth_verify_hash_idx\" UNIQUE, btree (verify_hash)\n \"users_search_updated_dt_idx\" btree (search_updated_dt DESC)\n\n\n-- \nWill Platnick\nSent with Sparrow (http://www.sparrowmailapp.com/?sig)\n\n\n\nWe upgraded from PG 9.1 to 9.2. Since the upgrade, the # of active queries has raised significantly, especially during our peak time where lots of users are logging in. According to New Relic, this query is now taking up the most amount of time during peak activity and my pg_stat_activity and slow log sampling agrees. We have 3 DB servers referenced here, production running 9.2.2, semi-idle (idle except for replication when I ran the test) running 9.2.2, and 9.1.3 completely idle with an old dump restored. 
\n\nHW: All servers have the same amount of RAM and CPU and all our indexes fit in RAM. HD's differ, but they're all SSD and since the explain analyze's are all indexes, I'm going to assume it doesn't mean that much in the context. \n\nConfiguration across all three servers is the same.\n\nTable was auto vacuumed today.\n\nIdle DB running 9.1.3\n\nhttp://explain.depesz.com/s/7DZ\n\nSemi-Idle DB running 9.2.2 (transactions ran previously and replication happening, but no active traffic when query was ran)\n\nhttp://explain.depesz.com/s/fEE\n\nProduction DB running 9.2.2 (Incredibly Active)\n\nhttp://explain.depesz.com/s/qVW\n\n Table \"public.users\"\n Column | Type | Modifiers \n-------------------+--------------------------+----------------------------------------------------\n id | integer | not null default nextval('users_id_seq'::regclass)\n created_dt | timestamp with time zone | default now()\n last_login_dt | timestamp with time zone | default now()\n email | character varying | not null\n verified | boolean | default false\n first_name | character varying | \n last_name | character varying | \n username | character varying(50) | not null\n location | character varying | \n im_type | character varying | \n im_username | character varying | \n website | character varying | \n phone_mobile | character varying(30) | \n postal_code | character varying(10) | \n language_tag | character varying(7) | default 'en'::character varying\n settings | text | default ''::text\n country | character(2) | default ''::bpchar\n timezone | character varying(50) | not null default 'UTC'::character varying\n verify_hash | character varying | \n bio | character varying(160) | \n twitter | character varying | \n facebook | character varying | \n search_updated_dt | timestamp with time zone | not null default now()\n source | character varying | \n auto_verified | boolean | \n password | character(60) | \n google | character varying | \n gender | smallint | not null default 0\n birthdate | date | \n weibo | character varying | \nIndexes:\n \"users_pkey\" PRIMARY KEY, btree (id)\n \"u_auth_email_index\" UNIQUE, btree (lower(email::text))\n \"u_auth_uname_index\" UNIQUE, btree (lower(username::text))\n \"u_auth_verify_hash_idx\" UNIQUE, btree (verify_hash)\n \"users_search_updated_dt_idx\" btree (search_updated_dt DESC)\n\n\n-- Will PlatnickSent with Sparrow",
"msg_date": "Mon, 4 Feb 2013 22:45:53 -0500",
"msg_from": "Will Platnick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow Query Help"
},
{
"msg_contents": "On 05.02.2013 05:45, Will Platnick wrote:\n> We upgraded from PG 9.1 to 9.2. Since the upgrade, the # of active queries has raised significantly, especially during our peak time where lots of users are logging in. According to New Relic, this query is now taking up the most amount of time during peak activity and my pg_stat_activity and slow log sampling agrees. We have 3 DB servers referenced here, production running 9.2.2, semi-idle (idle except for replication when I ran the test) running 9.2.2, and 9.1.3 completely idle with an old dump restored.\n\nThe only thing that stands out is that it always checks both indexes for \nmatches. Since you only want a single row as a result, it seems like it \nwould be better to first check one index, and only check the other one \nif there's no match. Rewriting the query with UNION should do that:\n\nSELECT id, username, password, email, verified, timezone FROM users \nWHERE lower(username) = 'randomuser'\nUNION ALL\nSELECT id, username, password, email, verified, timezone FROM users \nWHERE lower(email) = 'randomuser'\nLIMIT 1;\n\nAlso, if you can assume that email addresses always contain the \n@-character, you could take advantage of that and only do the \nlower(email) = 'randomuser' search if there is one.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 05 Feb 2013 15:58:24 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query Help"
},
{
"msg_contents": "On Tue, Feb 5, 2013 at 9:15 AM, Will Platnick <[email protected]> wrote:\n> We upgraded from PG 9.1 to 9.2. Since the upgrade, the # of active queries\n> has raised significantly, especially during our peak time where lots of\n> users are logging in. According to New Relic, this query is now taking up\n> the most amount of time during peak activity and my pg_stat_activity and\n> slow log sampling agrees. We have 3 DB servers referenced here, production\n> running 9.2.2, semi-idle (idle except for replication when I ran the test)\n> running 9.2.2, and 9.1.3 completely idle with an old dump restored.\n>\n\nThe only thing that stands out is: on your production server I see\n\"Total runtime: 7.515 ms\", but the top node in EXPLAIN ANAYZE shows\nactual time as 0.179 ms. Not sure where that additional time is being\nspent though. It could be ExecutorStart/End, but have no idea why they\nshould take so long.\n\nThanks,\nPavan\n-- \nPavan Deolasee\nhttp://www.linkedin.com/in/pavandeolasee\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 6 Feb 2013 14:21:14 +0530",
"msg_from": "Pavan Deolasee <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query Help"
},
{
"msg_contents": "Good eye, I totally missed that! \n\nAny ideas on how to troubleshoot this delay? \n\n\n\nOn Wednesday, February 6, 2013 at 3:51 AM, Pavan Deolasee wrote:\n\n> On Tue, Feb 5, 2013 at 9:15 AM, Will Platnick <[email protected] (mailto:[email protected])> wrote:\n> > We upgraded from PG 9.1 to 9.2. Since the upgrade, the # of active queries\n> > has raised significantly, especially during our peak time where lots of\n> > users are logging in. According to New Relic, this query is now taking up\n> > the most amount of time during peak activity and my pg_stat_activity and\n> > slow log sampling agrees. We have 3 DB servers referenced here, production\n> > running 9.2.2, semi-idle (idle except for replication when I ran the test)\n> > running 9.2.2, and 9.1.3 completely idle with an old dump restored.\n> > \n> \n> \n> The only thing that stands out is: on your production server I see\n> \"Total runtime: 7.515 ms\", but the top node in EXPLAIN ANAYZE shows\n> actual time as 0.179 ms. Not sure where that additional time is being\n> spent though. It could be ExecutorStart/End, but have no idea why they\n> should take so long.\n> \n> Thanks,\n> Pavan\n> -- \n> Pavan Deolasee\n> http://www.linkedin.com/in/pavandeolasee\n> \n> \n\n\n\nGood eye, I totally missed that!\n Any ideas on how to troubleshoot this delay?\n\nOn Wednesday, February 6, 2013 at 3:51 AM, Pavan Deolasee wrote:\n\nOn Tue, Feb 5, 2013 at 9:15 AM, Will Platnick <[email protected]> wrote:We upgraded from PG 9.1 to 9.2. Since the upgrade, the # of active querieshas raised significantly, especially during our peak time where lots ofusers are logging in. According to New Relic, this query is now taking upthe most amount of time during peak activity and my pg_stat_activity andslow log sampling agrees. We have 3 DB servers referenced here, productionrunning 9.2.2, semi-idle (idle except for replication when I ran the test)running 9.2.2, and 9.1.3 completely idle with an old dump restored.The only thing that stands out is: on your production server I see\"Total runtime: 7.515 ms\", but the top node in EXPLAIN ANAYZE showsactual time as 0.179 ms. Not sure where that additional time is beingspent though. It could be ExecutorStart/End, but have no idea why theyshould take so long.Thanks,Pavan-- Pavan Deolaseehttp://www.linkedin.com/in/pavandeolasee",
"msg_date": "Wed, 6 Feb 2013 08:17:50 -0500",
"msg_from": "Will Platnick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Query Help"
},
{
"msg_contents": "Will Platnick <[email protected]> wrote:\n> Will Platnick <[email protected]> wrote:\n\n>> The only thing that stands out is: on your production server I see\n>> \"Total runtime: 7.515 ms\", but the top node in EXPLAIN ANAYZE shows\n>> actual time as 0.179 ms. Not sure where that additional time is being\n>> spent though. It could be ExecutorStart/End, but have no idea why they\n>> should take so long.\n\n> Any ideas on how to troubleshoot this delay?\n\nIs the client which is running the query on the same machine as the\nserver? If not, what's the ping time between them?\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 6 Feb 2013 08:22:40 -0800 (PST)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query Help"
},
{
"msg_contents": "Clients are technically our pgbouncer which is on the same machine. The explain analyze was local through psql direct to postgresql.\n\n\nOn Wednesday, February 6, 2013 at 11:22 AM, Kevin Grittner wrote:\n\n> Will Platnick <[email protected] (mailto:[email protected])> wrote:\n> > Will Platnick <[email protected] (mailto:[email protected])> wrote:\n> \n> \n> > > The only thing that stands out is: on your production server I see\n> > > \"Total runtime: 7.515 ms\", but the top node in EXPLAIN ANAYZE shows\n> > > actual time as 0.179 ms. Not sure where that additional time is being\n> > > spent though. It could be ExecutorStart/End, but have no idea why they\n> > > should take so long.\n> > > \n> > \n> \n> \n> > Any ideas on how to troubleshoot this delay?\n> \n> Is the client which is running the query on the same machine as the\n> server? If not, what's the ping time between them?\n> \n> -Kevin \n\n\nClients are technically our pgbouncer which is on the same machine. The explain analyze was local through psql direct to postgresql.\nOn Wednesday, February 6, 2013 at 11:22 AM, Kevin Grittner wrote:\n\nWill Platnick <[email protected]> wrote:Will Platnick <[email protected]> wrote:The only thing that stands out is: on your production server I see\"Total runtime: 7.515 ms\", but the top node in EXPLAIN ANAYZE showsactual time as 0.179 ms. Not sure where that additional time is beingspent though. It could be ExecutorStart/End, but have no idea why theyshould take so long.Any ideas on how to troubleshoot this delay?Is the client which is running the query on the same machine as theserver? If not, what's the ping time between them?-Kevin",
"msg_date": "Wed, 6 Feb 2013 11:24:34 -0500",
"msg_from": "Will Platnick <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Query Help"
},
{
"msg_contents": "On Wed, Feb 6, 2013 at 9:52 PM, Kevin Grittner <[email protected]> wrote:\n> Will Platnick <[email protected]> wrote:\n>> Will Platnick <[email protected]> wrote:\n>\n>>> The only thing that stands out is: on your production server I see\n>>> \"Total runtime: 7.515 ms\", but the top node in EXPLAIN ANAYZE shows\n>>> actual time as 0.179 ms. Not sure where that additional time is being\n>>> spent though. It could be ExecutorStart/End, but have no idea why they\n>>> should take so long.\n>\n>> Any ideas on how to troubleshoot this delay?\n>\n> Is the client which is running the query on the same machine as the\n> server? If not, what's the ping time between them?\n>\n\nI don't think the network latency can cause that. The \"Total runtime\"\nis calculated on the server side itself - see ExplainOnePlan().\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nhttp://www.linkedin.com/in/pavandeolasee\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 6 Feb 2013 21:57:37 +0530",
"msg_from": "Pavan Deolasee <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query Help"
}
] |
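Heikki's follow-up suggestion above (probe the e-mail index only when the login string actually contains an @) can be folded into the UNION ALL form he gave; this is a sketch using the columns from the schema shown earlier, with 'randomuser' standing in for the login parameter:

  SELECT id, username, password, email, verified, timezone
  FROM users
  WHERE lower(username) = lower('randomuser')
  UNION ALL
  SELECT id, username, password, email, verified, timezone
  FROM users
  WHERE position('@' in 'randomuser') > 0
    AND lower(email) = lower('randomuser')
  LIMIT 1;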
[
{
"msg_contents": "Server specs:\nDell R610\ndual E5645 hex-core 2.4GHz\n192GB RAM\nRAID 1: 2x400GB SSD (OS + WAL logs)\nRAID 10: 4x400GB SSD (/var/lib/pgsql)\n\n\n/etc/sysctl.conf:\nkernel.msgmnb = 65536\nkernel.msgmax = 65536\nkernel.shmmax = 68719476736\nkernel.shmall = 4294967296\nvm.overcommit_memory = 0\nvm.swappiness = 0\nvm.dirty_background_bytes = 536870912\nvm.dirty_bytes = 536870912\n\n\npostgresql.conf:\nlisten_addresses = '*' # what IP address(es) to listen on;\nmax_connections = 150 # (change requires restart)\nshared_buffers = 48GB # min 128kB\nwork_mem = 1310MB # min 64kB\nmaintenance_work_mem = 24GB # min 1MB\nwal_level = hot_standby # minimal, archive, or hot_standby\ncheckpoint_segments = 64 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 30min # range 30s-1h\ncheckpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0\nmax_wal_senders = 5 # max number of walsender processes\nwal_keep_segments = 2000 # in logfile segments, 16MB each; 0 disables\nhot_standby = on # \"on\" allows queries during recovery\nmax_standby_archive_delay = 120s # max delay before canceling queries\nmax_standby_streaming_delay = 120s # max delay before canceling queries\neffective_cache_size = 120GB\nconstraint_exclusion = partition # on, off, or partition\nlog_destination = 'syslog' # Valid values are combinations of\nlogging_collector = on # Enable capturing of stderr and csvlog\nlog_directory = 'pg_log' # directory where log files are written,\nlog_filename = 'postgresql-%a.log' # log file name pattern,\nlog_truncate_on_rotation = on # If on, an existing log file with the\nlog_rotation_age = 1d # Automatic rotation of logfiles will\nlog_rotation_size = 0 # Automatic rotation of logfiles will\nlog_min_duration_statement = 500 # -1 is disabled, 0 logs all statements\nlog_checkpoints = on\nlog_line_prefix = 'user=%u db=%d remote=%r ' # special values:\nlog_lock_waits = on # log lock waits >= deadlock_timeout\nautovacuum = on # Enable autovacuum subprocess? 'on'\nlog_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and\nautovacuum_max_workers = 5 # max number of autovacuum subprocesses\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8' # locale for system error message\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\ndeadlock_timeout = 300ms\n\n\nper pgtune:\n#------------------------------------------------------------------------------\n# pgtune wizard run on 2013-02-05\n# Based on 198333224 KB RAM in the server\n#------------------------------------------------------------------------------\ndefault_statistics_target = 100\nmaintenance_work_mem = 1GB\ncheckpoint_completion_target = 0.9\neffective_cache_size = 128GB\nwork_mem = 1152MB\nwal_buffers = 8MB\ncheckpoint_segments = 16\nshared_buffers = 44GB\nmax_connections = 80\n\nWe use pgbouncer (set to 140 connections) in transaction pooling mode in\nfront of our db.\n\n\nThe problem:\n\nFor the most part, the server hums along. No other applications run on this\nserver other than postgres. Load averages rarely break 2.0, it never swaps,\nand %iowait is usually not more than 0.12\n\nBut periodically, there are spikes in our app's db response time. 
Normally,\nthe app's db response time hovers in the 100ms range for most of the day.\nDuring the spike times, it can go up to 1000ms or 1500ms, and the number of\npg connections goes to 140 (maxed out to pgbouncer's limit, where normally\nit's only about 20-40 connections). Also, during these times, which usually\nlast less than 2 minutes, we will see several thousand queries in the pg\nlog (this is with log_min_duration_statement = 500), compared to maybe one\nor two dozen 500ms+ queries in non-spike times.\n\nInbetween spikes could be an hour, two hours, sometimes half a day. There\ndoesn't appear to be any pattern that we can see:\n* there are no obvious queries that are locking the db\n* it doesn't necessarily happen during high-traffic times, though it can\n* it doesn't happen during any known system, db, or app regularly-scheduled\njob, including crons\n* in general, there's no discernible regularity to it at all\n* it doesn't coincide with checkpoint starts or completions\n* it doesn't coincide with autovacuums\n* there are no messages in any system logs that might indicate any system\nor hardware-related issue\n\nBesides spikes in our graphs, the only other visible effect is that %system\nin sar goes from average of 0.7 to as high as 10.0 or so (%iowait and all\nother sar variables remain the same).\n\nAnd according to our monitoring system, web requests get queued up, and our\nalerting system sometimes either says there's a timeout or that it had\nmultiple web response times greater than 300ms, and so we suspect (but have\nno proof) that some users will see either a long hang or possibly a\ntimeout. But since it's almost always less than two minutes, and sometimes\nless than one, we don't really hear any complaints (guessing that most\npeople hit reload, and things work again, so they continue on), and we\nhaven't been able to see any negative effect ourselves.\n\nBut we want to get in front of the problem, in case it is something that\nwill get worse as traffic continues to grow. We've tweaked various configs\non the OS side as well as the postgresql.conf side. What's posted above is\nour current setup, and the problem persists.\n\nAny ideas as to where we could even look?\n\nAlso, whether related or unrelated to the spikes, are there any\nrecommendations for our postgresql.conf or sysctl.conf based on our\nhardware? 
From pgtune's output, I am lowering maintenance_work_mem from\n24GB down to maybe 2GB, but I keep reading conflicting things about other\nsettings, such as checkpoints or max_connections.\n\njohnny\n\nServer specs:Dell R610dual E5645 hex-core 2.4GHz192GB RAMRAID 1: 2x400GB SSD (OS + WAL logs)RAID 10: 4x400GB SSD (/var/lib/pgsql)\n/etc/sysctl.conf:kernel.msgmnb = 65536kernel.msgmax = 65536kernel.shmmax = 68719476736kernel.shmall = 4294967296\nvm.overcommit_memory = 0vm.swappiness = 0vm.dirty_background_bytes = 536870912vm.dirty_bytes = 536870912postgresql.conf:\nlisten_addresses = '*' # what IP address(es) to listen on;max_connections = 150 # (change requires restart)\nshared_buffers = 48GB # min 128kBwork_mem = 1310MB # min 64kBmaintenance_work_mem = 24GB # min 1MB\nwal_level = hot_standby # minimal, archive, or hot_standbycheckpoint_segments = 64 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 30min # range 30s-1hcheckpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0\nmax_wal_senders = 5 # max number of walsender processeswal_keep_segments = 2000 # in logfile segments, 16MB each; 0 disables\nhot_standby = on # \"on\" allows queries during recoverymax_standby_archive_delay = 120s # max delay before canceling queries\nmax_standby_streaming_delay = 120s # max delay before canceling querieseffective_cache_size = 120GBconstraint_exclusion = partition # on, off, or partition\nlog_destination = 'syslog' # Valid values are combinations oflogging_collector = on # Enable capturing of stderr and csvlog\nlog_directory = 'pg_log' # directory where log files are written,log_filename = 'postgresql-%a.log' # log file name pattern,\nlog_truncate_on_rotation = on # If on, an existing log file with thelog_rotation_age = 1d # Automatic rotation of logfiles will\nlog_rotation_size = 0 # Automatic rotation of logfiles willlog_min_duration_statement = 500 # -1 is disabled, 0 logs all statements\nlog_checkpoints = onlog_line_prefix = 'user=%u db=%d remote=%r ' # special values:log_lock_waits = on # log lock waits >= deadlock_timeout\nautovacuum = on # Enable autovacuum subprocess? 'on'log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and\nautovacuum_max_workers = 5 # max number of autovacuum subprocessesdatestyle = 'iso, mdy'lc_messages = 'en_US.UTF-8' # locale for system error message\nlc_monetary = 'en_US.UTF-8' # locale for monetary formattinglc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formattingdefault_text_search_config = 'pg_catalog.english'deadlock_timeout = 300ms\nper pgtune:#------------------------------------------------------------------------------# pgtune wizard run on 2013-02-05# Based on 198333224 KB RAM in the server\n#------------------------------------------------------------------------------default_statistics_target = 100maintenance_work_mem = 1GBcheckpoint_completion_target = 0.9\neffective_cache_size = 128GBwork_mem = 1152MBwal_buffers = 8MBcheckpoint_segments = 16shared_buffers = 44GBmax_connections = 80We use pgbouncer (set to 140 connections) in transaction pooling mode in front of our db.\nThe problem:For the most part, the server hums along. No other applications run on this server other than postgres. Load averages rarely break 2.0, it never swaps, and %iowait is usually not more than 0.12\nBut periodically, there are spikes in our app's db response time. Normally, the app's db response time hovers in the 100ms range for most of the day. 
During the spike times, it can go up to 1000ms or 1500ms, and the number of pg connections goes to 140 (maxed out to pgbouncer's limit, where normally it's only about 20-40 connections). Also, during these times, which usually last less than 2 minutes, we will see several thousand queries in the pg log (this is with log_min_duration_statement = 500), compared to maybe one or two dozen 500ms+ queries in non-spike times.\nInbetween spikes could be an hour, two hours, sometimes half a day. There doesn't appear to be any pattern that we can see:* there are no obvious queries that are locking the db\n* it doesn't necessarily happen during high-traffic times, though it can* it doesn't happen during any known system, db, or app regularly-scheduled job, including crons\n* in general, there's no discernible regularity to it at all* it doesn't coincide with checkpoint starts or completions* it doesn't coincide with autovacuums* there are no messages in any system logs that might indicate any system or hardware-related issue\nBesides spikes in our graphs, the only other visible effect is that %system in sar goes from average of 0.7 to as high as 10.0 or so (%iowait and all other sar variables remain the same).\nAnd according to our monitoring system, web requests get queued up, and our alerting system sometimes either says there's a timeout or that it had multiple web response times greater than 300ms, and so we suspect (but have no proof) that some users will see either a long hang or possibly a timeout. But since it's almost always less than two minutes, and sometimes less than one, we don't really hear any complaints (guessing that most people hit reload, and things work again, so they continue on), and we haven't been able to see any negative effect ourselves.\nBut we want to get in front of the problem, in case it is something that will get worse as traffic continues to grow. We've tweaked various configs on the OS side as well as the postgresql.conf side. What's posted above is our current setup, and the problem persists.\nAny ideas as to where we could even look?Also, whether related or unrelated to the spikes, are there any recommendations for our postgresql.conf or sysctl.conf based on our hardware? From pgtune's output, I am lowering maintenance_work_mem from 24GB down to maybe 2GB, but I keep reading conflicting things about other settings, such as checkpoints or max_connections.\njohnny",
"msg_date": "Tue, 5 Feb 2013 17:02:21 -0500",
"msg_from": "Johnny Tan <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql.conf recommendations"
},
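Before tuning further, it can help to see exactly what the running server believes each of the discussed settings to be and where each value came from; this read-only query against pg_settings covers the parameters debated in this thread (the list of names is just a convenient selection):

  SELECT name, setting, unit, source
  FROM pg_settings
  WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
                 'effective_cache_size', 'checkpoint_segments',
                 'checkpoint_completion_target', 'max_connections')
  ORDER BY name;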
{
"msg_contents": "Just out of curiosity, are you using transparent huge pages?\nOn Feb 5, 2013 5:03 PM, \"Johnny Tan\" <[email protected]> wrote:\n\n> Server specs:\n> Dell R610\n> dual E5645 hex-core 2.4GHz\n> 192GB RAM\n> RAID 1: 2x400GB SSD (OS + WAL logs)\n> RAID 10: 4x400GB SSD (/var/lib/pgsql)\n>\n>\n> /etc/sysctl.conf:\n> kernel.msgmnb = 65536\n> kernel.msgmax = 65536\n> kernel.shmmax = 68719476736\n> kernel.shmall = 4294967296\n> vm.overcommit_memory = 0\n> vm.swappiness = 0\n> vm.dirty_background_bytes = 536870912\n> vm.dirty_bytes = 536870912\n>\n>\n> postgresql.conf:\n> listen_addresses = '*' # what IP address(es) to listen on;\n> max_connections = 150 # (change requires restart)\n> shared_buffers = 48GB # min 128kB\n> work_mem = 1310MB # min 64kB\n> maintenance_work_mem = 24GB # min 1MB\n> wal_level = hot_standby # minimal, archive, or hot_standby\n> checkpoint_segments = 64 # in logfile segments, min 1, 16MB each\n> checkpoint_timeout = 30min # range 30s-1h\n> checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0\n> max_wal_senders = 5 # max number of walsender processes\n> wal_keep_segments = 2000 # in logfile segments, 16MB each; 0 disables\n> hot_standby = on # \"on\" allows queries during recovery\n> max_standby_archive_delay = 120s # max delay before canceling queries\n> max_standby_streaming_delay = 120s # max delay before canceling queries\n> effective_cache_size = 120GB\n> constraint_exclusion = partition # on, off, or partition\n> log_destination = 'syslog' # Valid values are combinations of\n> logging_collector = on # Enable capturing of stderr and csvlog\n> log_directory = 'pg_log' # directory where log files are written,\n> log_filename = 'postgresql-%a.log' # log file name pattern,\n> log_truncate_on_rotation = on # If on, an existing log file with the\n> log_rotation_age = 1d # Automatic rotation of logfiles will\n> log_rotation_size = 0 # Automatic rotation of logfiles will\n> log_min_duration_statement = 500 # -1 is disabled, 0 logs all statements\n> log_checkpoints = on\n> log_line_prefix = 'user=%u db=%d remote=%r ' # special values:\n> log_lock_waits = on # log lock waits >= deadlock_timeout\n> autovacuum = on # Enable autovacuum subprocess? 'on'\n> log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and\n> autovacuum_max_workers = 5 # max number of autovacuum subprocesses\n> datestyle = 'iso, mdy'\n> lc_messages = 'en_US.UTF-8' # locale for system error message\n> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting\n> lc_numeric = 'en_US.UTF-8' # locale for number formatting\n> lc_time = 'en_US.UTF-8' # locale for time formatting\n> default_text_search_config = 'pg_catalog.english'\n> deadlock_timeout = 300ms\n>\n>\n> per pgtune:\n>\n> #------------------------------------------------------------------------------\n> # pgtune wizard run on 2013-02-05\n> # Based on 198333224 KB RAM in the server\n>\n> #------------------------------------------------------------------------------\n> default_statistics_target = 100\n> maintenance_work_mem = 1GB\n> checkpoint_completion_target = 0.9\n> effective_cache_size = 128GB\n> work_mem = 1152MB\n> wal_buffers = 8MB\n> checkpoint_segments = 16\n> shared_buffers = 44GB\n> max_connections = 80\n>\n> We use pgbouncer (set to 140 connections) in transaction pooling mode in\n> front of our db.\n>\n>\n> The problem:\n>\n> For the most part, the server hums along. No other applications run on\n> this server other than postgres. 
Load averages rarely break 2.0, it never\n> swaps, and %iowait is usually not more than 0.12\n>\n> But periodically, there are spikes in our app's db response time.\n> Normally, the app's db response time hovers in the 100ms range for most of\n> the day. During the spike times, it can go up to 1000ms or 1500ms, and the\n> number of pg connections goes to 140 (maxed out to pgbouncer's limit, where\n> normally it's only about 20-40 connections). Also, during these times,\n> which usually last less than 2 minutes, we will see several thousand\n> queries in the pg log (this is with log_min_duration_statement = 500),\n> compared to maybe one or two dozen 500ms+ queries in non-spike times.\n>\n> Inbetween spikes could be an hour, two hours, sometimes half a day. There\n> doesn't appear to be any pattern that we can see:\n> * there are no obvious queries that are locking the db\n> * it doesn't necessarily happen during high-traffic times, though it can\n> * it doesn't happen during any known system, db, or app\n> regularly-scheduled job, including crons\n> * in general, there's no discernible regularity to it at all\n> * it doesn't coincide with checkpoint starts or completions\n> * it doesn't coincide with autovacuums\n> * there are no messages in any system logs that might indicate any system\n> or hardware-related issue\n>\n> Besides spikes in our graphs, the only other visible effect is that\n> %system in sar goes from average of 0.7 to as high as 10.0 or so (%iowait\n> and all other sar variables remain the same).\n>\n> And according to our monitoring system, web requests get queued up, and\n> our alerting system sometimes either says there's a timeout or that it had\n> multiple web response times greater than 300ms, and so we suspect (but have\n> no proof) that some users will see either a long hang or possibly a\n> timeout. But since it's almost always less than two minutes, and sometimes\n> less than one, we don't really hear any complaints (guessing that most\n> people hit reload, and things work again, so they continue on), and we\n> haven't been able to see any negative effect ourselves.\n>\n> But we want to get in front of the problem, in case it is something that\n> will get worse as traffic continues to grow. We've tweaked various configs\n> on the OS side as well as the postgresql.conf side. What's posted above is\n> our current setup, and the problem persists.\n>\n> Any ideas as to where we could even look?\n>\n> Also, whether related or unrelated to the spikes, are there any\n> recommendations for our postgresql.conf or sysctl.conf based on our\n> hardware? 
From pgtune's output, I am lowering maintenance_work_mem from\n> 24GB down to maybe 2GB, but I keep reading conflicting things about other\n> settings, such as checkpoints or max_connections.\n>\n> johnny\n>\n>",
"msg_date": "Tue, 5 Feb 2013 17:37:21 -0500",
"msg_from": "Josh Krupka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
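A quick way to answer Josh's question, sketched here for a RHEL/CentOS 6 style kernel as described in the server specs; the redhat_transparent_hugepage path is Red Hat specific, mainline kernels expose /sys/kernel/mm/transparent_hugepage instead:

cat /sys/kernel/mm/redhat_transparent_hugepage/enabled   # [always] means THP is on
cat /sys/kernel/mm/redhat_transparent_hugepage/defrag    # compaction at allocation time
grep AnonHugePages /proc/meminfo                         # non-zero means THP pages are actually in use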
{
"msg_contents": "# cat /sys/kernel/mm/redhat_transparent_hugepage/defrag\n[always] never\n\n\nOn Tue, Feb 5, 2013 at 5:37 PM, Josh Krupka <[email protected]> wrote:\n\n> Just out of curiosity, are you using transparent huge pages?\n> On Feb 5, 2013 5:03 PM, \"Johnny Tan\" <[email protected]> wrote:\n>\n>> Server specs:\n>> Dell R610\n>> dual E5645 hex-core 2.4GHz\n>> 192GB RAM\n>> RAID 1: 2x400GB SSD (OS + WAL logs)\n>> RAID 10: 4x400GB SSD (/var/lib/pgsql)\n>>\n>>\n>> /etc/sysctl.conf:\n>> kernel.msgmnb = 65536\n>> kernel.msgmax = 65536\n>> kernel.shmmax = 68719476736\n>> kernel.shmall = 4294967296\n>> vm.overcommit_memory = 0\n>> vm.swappiness = 0\n>> vm.dirty_background_bytes = 536870912\n>> vm.dirty_bytes = 536870912\n>>\n>>\n>> postgresql.conf:\n>> listen_addresses = '*' # what IP address(es) to listen on;\n>> max_connections = 150 # (change requires restart)\n>> shared_buffers = 48GB # min 128kB\n>> work_mem = 1310MB # min 64kB\n>> maintenance_work_mem = 24GB # min 1MB\n>> wal_level = hot_standby # minimal, archive, or hot_standby\n>> checkpoint_segments = 64 # in logfile segments, min 1, 16MB each\n>> checkpoint_timeout = 30min # range 30s-1h\n>> checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 -\n>> 1.0\n>> max_wal_senders = 5 # max number of walsender processes\n>> wal_keep_segments = 2000 # in logfile segments, 16MB each; 0 disables\n>> hot_standby = on # \"on\" allows queries during recovery\n>> max_standby_archive_delay = 120s # max delay before canceling queries\n>> max_standby_streaming_delay = 120s # max delay before canceling queries\n>> effective_cache_size = 120GB\n>> constraint_exclusion = partition # on, off, or partition\n>> log_destination = 'syslog' # Valid values are combinations of\n>> logging_collector = on # Enable capturing of stderr and csvlog\n>> log_directory = 'pg_log' # directory where log files are written,\n>> log_filename = 'postgresql-%a.log' # log file name pattern,\n>> log_truncate_on_rotation = on # If on, an existing log file with the\n>> log_rotation_age = 1d # Automatic rotation of logfiles will\n>> log_rotation_size = 0 # Automatic rotation of logfiles will\n>> log_min_duration_statement = 500 # -1 is disabled, 0 logs all statements\n>> log_checkpoints = on\n>> log_line_prefix = 'user=%u db=%d remote=%r ' # special values:\n>> log_lock_waits = on # log lock waits >= deadlock_timeout\n>> autovacuum = on # Enable autovacuum subprocess? 
'on'\n>> log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and\n>> autovacuum_max_workers = 5 # max number of autovacuum subprocesses\n>> datestyle = 'iso, mdy'\n>> lc_messages = 'en_US.UTF-8' # locale for system error message\n>> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting\n>> lc_numeric = 'en_US.UTF-8' # locale for number formatting\n>> lc_time = 'en_US.UTF-8' # locale for time formatting\n>> default_text_search_config = 'pg_catalog.english'\n>> deadlock_timeout = 300ms\n>>\n>>\n>> per pgtune:\n>>\n>> #------------------------------------------------------------------------------\n>> # pgtune wizard run on 2013-02-05\n>> # Based on 198333224 KB RAM in the server\n>>\n>> #------------------------------------------------------------------------------\n>> default_statistics_target = 100\n>> maintenance_work_mem = 1GB\n>> checkpoint_completion_target = 0.9\n>> effective_cache_size = 128GB\n>> work_mem = 1152MB\n>> wal_buffers = 8MB\n>> checkpoint_segments = 16\n>> shared_buffers = 44GB\n>> max_connections = 80\n>>\n>> We use pgbouncer (set to 140 connections) in transaction pooling mode in\n>> front of our db.\n>>\n>>\n>> The problem:\n>>\n>> For the most part, the server hums along. No other applications run on\n>> this server other than postgres. Load averages rarely break 2.0, it never\n>> swaps, and %iowait is usually not more than 0.12\n>>\n>> But periodically, there are spikes in our app's db response time.\n>> Normally, the app's db response time hovers in the 100ms range for most of\n>> the day. During the spike times, it can go up to 1000ms or 1500ms, and the\n>> number of pg connections goes to 140 (maxed out to pgbouncer's limit, where\n>> normally it's only about 20-40 connections). Also, during these times,\n>> which usually last less than 2 minutes, we will see several thousand\n>> queries in the pg log (this is with log_min_duration_statement = 500),\n>> compared to maybe one or two dozen 500ms+ queries in non-spike times.\n>>\n>> Inbetween spikes could be an hour, two hours, sometimes half a day. There\n>> doesn't appear to be any pattern that we can see:\n>> * there are no obvious queries that are locking the db\n>> * it doesn't necessarily happen during high-traffic times, though it can\n>> * it doesn't happen during any known system, db, or app\n>> regularly-scheduled job, including crons\n>> * in general, there's no discernible regularity to it at all\n>> * it doesn't coincide with checkpoint starts or completions\n>> * it doesn't coincide with autovacuums\n>> * there are no messages in any system logs that might indicate any system\n>> or hardware-related issue\n>>\n>> Besides spikes in our graphs, the only other visible effect is that\n>> %system in sar goes from average of 0.7 to as high as 10.0 or so (%iowait\n>> and all other sar variables remain the same).\n>>\n>> And according to our monitoring system, web requests get queued up, and\n>> our alerting system sometimes either says there's a timeout or that it had\n>> multiple web response times greater than 300ms, and so we suspect (but have\n>> no proof) that some users will see either a long hang or possibly a\n>> timeout. 
But since it's almost always less than two minutes, and sometimes\n>> less than one, we don't really hear any complaints (guessing that most\n>> people hit reload, and things work again, so they continue on), and we\n>> haven't been able to see any negative effect ourselves.\n>>\n>> But we want to get in front of the problem, in case it is something that\n>> will get worse as traffic continues to grow. We've tweaked various configs\n>> on the OS side as well as the postgresql.conf side. What's posted above is\n>> our current setup, and the problem persists.\n>>\n>> Any ideas as to where we could even look?\n>>\n>> Also, whether related or unrelated to the spikes, are there any\n>> recommendations for our postgresql.conf or sysctl.conf based on our\n>> hardware? From pgtune's output, I am lowering maintenance_work_mem from\n>> 24GB down to maybe 2GB, but I keep reading conflicting things about other\n>> settings, such as checkpoints or max_connections.\n>>\n>> johnny\n>>\n>>",
"msg_date": "Tue, 5 Feb 2013 18:46:37 -0500",
"msg_from": "Johnny Tan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf recommendations"
},
{
"msg_contents": "I've been looking into something on our system that sounds similar to what\nyou're seeing. I'm still researching it, but I'm suspecting the memory\ncompaction that runs as part of transparent huge pages when memory is\nallocated... yet to be proven. The tunable you mentioned controls the\ncompaction process that runs at allocation time so it can try to allocate\nlarge pages, there's a separate one that controls if the compaction is done\nin khugepaged, and a separate one that controls whether THP is used at all\nor not (/sys/kernel/mm/transparent_hugepage/enabled, or perhaps different\nin your distro)\n\nWhat's the output of this command?\negrep 'trans|thp|compact_' /proc/vmstat\ncompact_stall represents the number of processes that were stalled to do a\ncompaction, the other metrics have to do with other parts of THP. If you\nsee compact_stall climbing, from what I can tell those might be causing\nyour spikes. I haven't found a way of telling how long the processes have\nbeen stalled. You could probably get a little more insight into the\nprocesses with some tracing assuming you can catch it quickly enough.\nRunning perf top will also show the compaction happening but that doesn't\nnecessarily mean it's impacting your running processes.\n\n\n\n\nOn Tue, Feb 5, 2013 at 6:46 PM, Johnny Tan <[email protected]> wrote:\n\n> # cat /sys/kernel/mm/redhat_transparent_hugepage/defrag\n> [always] never\n>\n>\n> On Tue, Feb 5, 2013 at 5:37 PM, Josh Krupka <[email protected]> wrote:\n>\n>> Just out of curiosity, are you using transparent huge pages?\n>> On Feb 5, 2013 5:03 PM, \"Johnny Tan\" <[email protected]> wrote:\n>>\n>>> Server specs:\n>>> Dell R610\n>>> dual E5645 hex-core 2.4GHz\n>>> 192GB RAM\n>>> RAID 1: 2x400GB SSD (OS + WAL logs)\n>>> RAID 10: 4x400GB SSD (/var/lib/pgsql)\n>>>\n>>>\n>>> /etc/sysctl.conf:\n>>> kernel.msgmnb = 65536\n>>> kernel.msgmax = 65536\n>>> kernel.shmmax = 68719476736\n>>> kernel.shmall = 4294967296\n>>> vm.overcommit_memory = 0\n>>> vm.swappiness = 0\n>>> vm.dirty_background_bytes = 536870912\n>>> vm.dirty_bytes = 536870912\n>>>\n>>>\n>>> postgresql.conf:\n>>> listen_addresses = '*' # what IP address(es) to listen on;\n>>> max_connections = 150 # (change requires restart)\n>>> shared_buffers = 48GB # min 128kB\n>>> work_mem = 1310MB # min 64kB\n>>> maintenance_work_mem = 24GB # min 1MB\n>>> wal_level = hot_standby # minimal, archive, or hot_standby\n>>> checkpoint_segments = 64 # in logfile segments, min 1, 16MB each\n>>> checkpoint_timeout = 30min # range 30s-1h\n>>> checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 -\n>>> 1.0\n>>> max_wal_senders = 5 # max number of walsender processes\n>>> wal_keep_segments = 2000 # in logfile segments, 16MB each; 0 disables\n>>> hot_standby = on # \"on\" allows queries during recovery\n>>> max_standby_archive_delay = 120s # max delay before canceling queries\n>>> max_standby_streaming_delay = 120s # max delay before canceling queries\n>>> effective_cache_size = 120GB\n>>> constraint_exclusion = partition # on, off, or partition\n>>> log_destination = 'syslog' # Valid values are combinations of\n>>> logging_collector = on # Enable capturing of stderr and csvlog\n>>> log_directory = 'pg_log' # directory where log files are written,\n>>> log_filename = 'postgresql-%a.log' # log file name pattern,\n>>> log_truncate_on_rotation = on # If on, an existing log file with the\n>>> log_rotation_age = 1d # Automatic rotation of logfiles will\n>>> log_rotation_size = 0 # Automatic rotation of logfiles 
will\n>>> log_min_duration_statement = 500 # -1 is disabled, 0 logs all statements\n>>> log_checkpoints = on\n>>> log_line_prefix = 'user=%u db=%d remote=%r ' # special values:\n>>> log_lock_waits = on # log lock waits >= deadlock_timeout\n>>> autovacuum = on # Enable autovacuum subprocess? 'on'\n>>> log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and\n>>> autovacuum_max_workers = 5 # max number of autovacuum subprocesses\n>>> datestyle = 'iso, mdy'\n>>> lc_messages = 'en_US.UTF-8' # locale for system error message\n>>> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting\n>>> lc_numeric = 'en_US.UTF-8' # locale for number formatting\n>>> lc_time = 'en_US.UTF-8' # locale for time formatting\n>>> default_text_search_config = 'pg_catalog.english'\n>>> deadlock_timeout = 300ms\n>>>\n>>>\n>>> per pgtune:\n>>>\n>>> #------------------------------------------------------------------------------\n>>> # pgtune wizard run on 2013-02-05\n>>> # Based on 198333224 KB RAM in the server\n>>>\n>>> #------------------------------------------------------------------------------\n>>> default_statistics_target = 100\n>>> maintenance_work_mem = 1GB\n>>> checkpoint_completion_target = 0.9\n>>> effective_cache_size = 128GB\n>>> work_mem = 1152MB\n>>> wal_buffers = 8MB\n>>> checkpoint_segments = 16\n>>> shared_buffers = 44GB\n>>> max_connections = 80\n>>>\n>>> We use pgbouncer (set to 140 connections) in transaction pooling mode in\n>>> front of our db.\n>>>\n>>>\n>>> The problem:\n>>>\n>>> For the most part, the server hums along. No other applications run on\n>>> this server other than postgres. Load averages rarely break 2.0, it never\n>>> swaps, and %iowait is usually not more than 0.12\n>>>\n>>> But periodically, there are spikes in our app's db response time.\n>>> Normally, the app's db response time hovers in the 100ms range for most of\n>>> the day. During the spike times, it can go up to 1000ms or 1500ms, and the\n>>> number of pg connections goes to 140 (maxed out to pgbouncer's limit, where\n>>> normally it's only about 20-40 connections). Also, during these times,\n>>> which usually last less than 2 minutes, we will see several thousand\n>>> queries in the pg log (this is with log_min_duration_statement = 500),\n>>> compared to maybe one or two dozen 500ms+ queries in non-spike times.\n>>>\n>>> Inbetween spikes could be an hour, two hours, sometimes half a day.\n>>> There doesn't appear to be any pattern that we can see:\n>>> * there are no obvious queries that are locking the db\n>>> * it doesn't necessarily happen during high-traffic times, though it can\n>>> * it doesn't happen during any known system, db, or app\n>>> regularly-scheduled job, including crons\n>>> * in general, there's no discernible regularity to it at all\n>>> * it doesn't coincide with checkpoint starts or completions\n>>> * it doesn't coincide with autovacuums\n>>> * there are no messages in any system logs that might indicate any\n>>> system or hardware-related issue\n>>>\n>>> Besides spikes in our graphs, the only other visible effect is that\n>>> %system in sar goes from average of 0.7 to as high as 10.0 or so (%iowait\n>>> and all other sar variables remain the same).\n>>>\n>>> And according to our monitoring system, web requests get queued up, and\n>>> our alerting system sometimes either says there's a timeout or that it had\n>>> multiple web response times greater than 300ms, and so we suspect (but have\n>>> no proof) that some users will see either a long hang or possibly a\n>>> timeout. 
But since it's almost always less than two minutes, and sometimes\n>>> less than one, we don't really hear any complaints (guessing that most\n>>> people hit reload, and things work again, so they continue on), and we\n>>> haven't been able to see any negative effect ourselves.\n>>>\n>>> But we want to get in front of the problem, in case it is something that\n>>> will get worse as traffic continues to grow. We've tweaked various configs\n>>> on the OS side as well as the postgresql.conf side. What's posted above is\n>>> our current setup, and the problem persists.\n>>>\n>>> Any ideas as to where we could even look?\n>>>\n>>> Also, whether related or unrelated to the spikes, are there any\n>>> recommendations for our postgresql.conf or sysctl.conf based on our\n>>> hardware? From pgtune's output, I am lowering maintenance_work_mem from\n>>> 24GB down to maybe 2GB, but I keep reading conflicting things about other\n>>> settings, such as checkpoints or max_connections.\n>>>\n>>> johnny\n>>>\n>>>\n>",
"msg_date": "Tue, 5 Feb 2013 23:23:35 -0500",
"msg_from": "Josh Krupka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
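A minimal sketch of how the compaction counters Josh mentions could be tracked over time, so that growth in compact_stall can later be lined up against the latency spikes; compact_stall only exists on kernels built with memory compaction, and the log path here is arbitrary:

egrep 'trans|thp|compact_' /proc/vmstat

# sample compact_stall once a minute with a timestamp
while true; do
  echo "$(date '+%F %T') $(grep compact_stall /proc/vmstat)" >> /var/log/compact_stall.log
  sleep 60
done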
{
"msg_contents": "On Wed, Feb 6, 2013 at 3:32 AM, Johnny Tan <[email protected]> wrote:\n>\n> maintenance_work_mem = 24GB # min 1MB\n\nI'm quite astonished by this setting. Not that it explains the problem\nat hand, but I wonder if this is a plain mistake in configuration.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nhttp://www.linkedin.com/in/pavandeolasee\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 6 Feb 2013 11:30:06 +0530",
"msg_from": "Pavan Deolasee <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
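The reason Pavan is astonished: maintenance_work_mem is claimed per maintenance operation (VACUUM, CREATE INDEX, and each autovacuum worker can take its own allocation), so a 24GB default with autovacuum_max_workers = 5 can add up quickly. A hedged sketch of the usual alternative, assuming the data directory sits under /var/lib/pgsql as in the server specs; mydb and my_big_index are placeholders:

# in postgresql.conf, lower the server-wide default (a reload is enough, no restart):
#   maintenance_work_mem = 2GB
pg_ctl reload -D /var/lib/pgsql/data

# raise it only in the session that actually needs it, e.g. a large index rebuild:
psql -d mydb -c "SET maintenance_work_mem = '8GB'; REINDEX INDEX my_big_index;"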
{
"msg_contents": "On 6 Feb 2013, at 12:23 PM, Josh Krupka wrote:\n\n> On Tue, Feb 5, 2013 at 6:46 PM, Johnny Tan <[email protected]> wrote:\n> shared_buffers = 48GB\t\t\t# min 128kB\n> \n> \n\n\nHi,\n\nFrom the postgresql.conf, I can see that the shared_buffers is set to 48GB which is not small, it would be possible that the large buffer cache could be \"dirty\", when a checkpoint starts, it would cause a checkpoint I/O spike.\n\nI would like to suggest you about using pgtune to get recommended conf for postgresql.\n\nRegards\n\n\nOn 6 Feb 2013, at 12:23 PM, Josh Krupka wrote:On Tue, Feb 5, 2013 at 6:46 PM, Johnny Tan <[email protected]> wrote:\nshared_buffers = 48GB # min 128kB\n\nHi,From the postgresql.conf, I can see that the shared_buffers is set to 48GB which is not small, it would be possible that the large buffer cache could be \"dirty\", when a checkpoint starts, it would cause a checkpoint I/O spike.I would like to suggest you about using pgtune to get recommended conf for postgresql.Regards",
"msg_date": "Wed, 6 Feb 2013 14:18:29 +0800",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
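One way to test the dirty-shared_buffers theory directly is the contrib pg_buffercache extension, which exposes an isdirty flag per buffer. A sketch, assuming the default 8kB block size; mydb is a placeholder database name:

psql -d mydb <<'SQL'
CREATE EXTENSION IF NOT EXISTS pg_buffercache;
-- how much of shared_buffers is dirty right now
SELECT count(*) AS dirty_buffers,
       pg_size_pretty(count(*) * 8192) AS dirty_size
FROM pg_buffercache
WHERE isdirty;
SQL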
{
"msg_contents": "\"[email protected]\" <[email protected]> wrote:\n> Johnny Tan <[email protected]> wrote:\n\n>>shared_buffers = 48GB# min 128kB\n\n> From the postgresql.conf, I can see that the shared_buffers is\n> set to 48GB which is not small, it would be possible that the\n> large buffer cache could be \"dirty\", when a checkpoint starts, it\n> would cause a checkpoint I/O spike.\n>\n>\n> I would like to suggest you about using pgtune to get recommended\n> conf for postgresql.\n\nI have seen symptoms like those described which were the result of\ntoo many dirty pages accumulating inside PostgreSQL shared_buffers.\nIt might be something else entirely in this case, but it would at\nleast be worth trying a reduced shared_buffers setting combined\nwith more aggressive bgwriter settings. I might try something like\nthe following changes, as an experiment:\n\nshared_buffers = 8GB\nbgwriter_lru_maxpages = 1000\nbgwriter_lru_multiplier = 4\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 6 Feb 2013 04:49:47 -0800 (PST)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
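If Kevin's experiment is tried, pg_stat_bgwriter gives a way to judge afterwards whether the background writer is keeping up. A sketch with placeholder paths and database name; note that shared_buffers needs a full restart, while the bgwriter_* settings only need a reload:

# in postgresql.conf:
#   shared_buffers = 8GB
#   bgwriter_lru_maxpages = 1000
#   bgwriter_lru_multiplier = 4
pg_ctl restart -D /var/lib/pgsql/data -m fast

# if the bgwriter keeps up, buffers_backend (pages written by ordinary backends)
# should grow much more slowly than buffers_clean between samples:
psql -d mydb -c "SELECT buffers_clean, maxwritten_clean, buffers_backend FROM pg_stat_bgwriter;"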
{
"msg_contents": "Josh/Johnny,\n\nWe've been seeing a similar problem as well, and had also figured THP was\ninvolved. We found this in syslog:\nhttps://gist.github.com/davewhittaker/4723285, which led us to disable THP\n2 days ago. At first the results seemed good. In particular, our issues\nalways seemed interrupt related and the average interrupts/sec immediately\ndropped from 7k to around 3k after restarting. The good news is that we\ndidn't see any spike in system CPU time yesterday. The bad news is that we\ndid see a spike in app latency that originated from the DB, but now the\nspike is in user CPU time and seems to be spread across all of the running\npostgres processes. Interrupts still blew up to 21k/sec when it happened.\n We are still diagnosing, but I'd be curious to see if either of you get\nsimilar results from turning THP off.\n\n\nOn Tue, Feb 5, 2013 at 11:23 PM, Josh Krupka <[email protected]> wrote:\n\n> I've been looking into something on our system that sounds similar to what\n> you're seeing. I'm still researching it, but I'm suspecting the memory\n> compaction that runs as part of transparent huge pages when memory is\n> allocated... yet to be proven. The tunable you mentioned controls the\n> compaction process that runs at allocation time so it can try to allocate\n> large pages, there's a separate one that controls if the compaction is done\n> in khugepaged, and a separate one that controls whether THP is used at all\n> or not (/sys/kernel/mm/transparent_hugepage/enabled, or perhaps different\n> in your distro)\n>\n> What's the output of this command?\n> egrep 'trans|thp|compact_' /proc/vmstat\n> compact_stall represents the number of processes that were stalled to do\n> a compaction, the other metrics have to do with other parts of THP. If you\n> see compact_stall climbing, from what I can tell those might be causing\n> your spikes. I haven't found a way of telling how long the processes have\n> been stalled. 
You could probably get a little more insight into the\n> processes with some tracing assuming you can catch it quickly enough.\n> Running perf top will also show the compaction happening but that doesn't\n> necessarily mean it's impacting your running processes.\n>\n>\n>\n>\n> On Tue, Feb 5, 2013 at 6:46 PM, Johnny Tan <[email protected]> wrote:\n>\n>> # cat /sys/kernel/mm/redhat_transparent_hugepage/defrag\n>> [always] never\n>>\n>>\n>> On Tue, Feb 5, 2013 at 5:37 PM, Josh Krupka <[email protected]> wrote:\n>>\n>>> Just out of curiosity, are you using transparent huge pages?\n>>> On Feb 5, 2013 5:03 PM, \"Johnny Tan\" <[email protected]> wrote:\n>>>\n>>>> Server specs:\n>>>> Dell R610\n>>>> dual E5645 hex-core 2.4GHz\n>>>> 192GB RAM\n>>>> RAID 1: 2x400GB SSD (OS + WAL logs)\n>>>> RAID 10: 4x400GB SSD (/var/lib/pgsql)\n>>>>\n>>>>\n>>>> /etc/sysctl.conf:\n>>>> kernel.msgmnb = 65536\n>>>> kernel.msgmax = 65536\n>>>> kernel.shmmax = 68719476736\n>>>> kernel.shmall = 4294967296\n>>>> vm.overcommit_memory = 0\n>>>> vm.swappiness = 0\n>>>> vm.dirty_background_bytes = 536870912\n>>>> vm.dirty_bytes = 536870912\n>>>>\n>>>>\n>>>> postgresql.conf:\n>>>> listen_addresses = '*' # what IP address(es) to listen on;\n>>>> max_connections = 150 # (change requires restart)\n>>>> shared_buffers = 48GB # min 128kB\n>>>> work_mem = 1310MB # min 64kB\n>>>> maintenance_work_mem = 24GB # min 1MB\n>>>> wal_level = hot_standby # minimal, archive, or hot_standby\n>>>> checkpoint_segments = 64 # in logfile segments, min 1, 16MB each\n>>>> checkpoint_timeout = 30min # range 30s-1h\n>>>> checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 -\n>>>> 1.0\n>>>> max_wal_senders = 5 # max number of walsender processes\n>>>> wal_keep_segments = 2000 # in logfile segments, 16MB each; 0 disables\n>>>> hot_standby = on # \"on\" allows queries during recovery\n>>>> max_standby_archive_delay = 120s # max delay before canceling queries\n>>>> max_standby_streaming_delay = 120s # max delay before canceling queries\n>>>> effective_cache_size = 120GB\n>>>> constraint_exclusion = partition # on, off, or partition\n>>>> log_destination = 'syslog' # Valid values are combinations of\n>>>> logging_collector = on # Enable capturing of stderr and csvlog\n>>>> log_directory = 'pg_log' # directory where log files are written,\n>>>> log_filename = 'postgresql-%a.log' # log file name pattern,\n>>>> log_truncate_on_rotation = on # If on, an existing log file with the\n>>>> log_rotation_age = 1d # Automatic rotation of logfiles will\n>>>> log_rotation_size = 0 # Automatic rotation of logfiles will\n>>>> log_min_duration_statement = 500 # -1 is disabled, 0 logs all\n>>>> statements\n>>>> log_checkpoints = on\n>>>> log_line_prefix = 'user=%u db=%d remote=%r ' # special values:\n>>>> log_lock_waits = on # log lock waits >= deadlock_timeout\n>>>> autovacuum = on # Enable autovacuum subprocess? 
'on'\n>>>> log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and\n>>>> autovacuum_max_workers = 5 # max number of autovacuum subprocesses\n>>>> datestyle = 'iso, mdy'\n>>>> lc_messages = 'en_US.UTF-8' # locale for system error message\n>>>> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting\n>>>> lc_numeric = 'en_US.UTF-8' # locale for number formatting\n>>>> lc_time = 'en_US.UTF-8' # locale for time formatting\n>>>> default_text_search_config = 'pg_catalog.english'\n>>>> deadlock_timeout = 300ms\n>>>>\n>>>>\n>>>> per pgtune:\n>>>>\n>>>> #------------------------------------------------------------------------------\n>>>> # pgtune wizard run on 2013-02-05\n>>>> # Based on 198333224 KB RAM in the server\n>>>>\n>>>> #------------------------------------------------------------------------------\n>>>> default_statistics_target = 100\n>>>> maintenance_work_mem = 1GB\n>>>> checkpoint_completion_target = 0.9\n>>>> effective_cache_size = 128GB\n>>>> work_mem = 1152MB\n>>>> wal_buffers = 8MB\n>>>> checkpoint_segments = 16\n>>>> shared_buffers = 44GB\n>>>> max_connections = 80\n>>>>\n>>>> We use pgbouncer (set to 140 connections) in transaction pooling mode\n>>>> in front of our db.\n>>>>\n>>>>\n>>>> The problem:\n>>>>\n>>>> For the most part, the server hums along. No other applications run on\n>>>> this server other than postgres. Load averages rarely break 2.0, it never\n>>>> swaps, and %iowait is usually not more than 0.12\n>>>>\n>>>> But periodically, there are spikes in our app's db response time.\n>>>> Normally, the app's db response time hovers in the 100ms range for most of\n>>>> the day. During the spike times, it can go up to 1000ms or 1500ms, and the\n>>>> number of pg connections goes to 140 (maxed out to pgbouncer's limit, where\n>>>> normally it's only about 20-40 connections). Also, during these times,\n>>>> which usually last less than 2 minutes, we will see several thousand\n>>>> queries in the pg log (this is with log_min_duration_statement = 500),\n>>>> compared to maybe one or two dozen 500ms+ queries in non-spike times.\n>>>>\n>>>> Inbetween spikes could be an hour, two hours, sometimes half a day.\n>>>> There doesn't appear to be any pattern that we can see:\n>>>> * there are no obvious queries that are locking the db\n>>>> * it doesn't necessarily happen during high-traffic times, though it can\n>>>> * it doesn't happen during any known system, db, or app\n>>>> regularly-scheduled job, including crons\n>>>> * in general, there's no discernible regularity to it at all\n>>>> * it doesn't coincide with checkpoint starts or completions\n>>>> * it doesn't coincide with autovacuums\n>>>> * there are no messages in any system logs that might indicate any\n>>>> system or hardware-related issue\n>>>>\n>>>> Besides spikes in our graphs, the only other visible effect is that\n>>>> %system in sar goes from average of 0.7 to as high as 10.0 or so (%iowait\n>>>> and all other sar variables remain the same).\n>>>>\n>>>> And according to our monitoring system, web requests get queued up, and\n>>>> our alerting system sometimes either says there's a timeout or that it had\n>>>> multiple web response times greater than 300ms, and so we suspect (but have\n>>>> no proof) that some users will see either a long hang or possibly a\n>>>> timeout. 
But since it's almost always less than two minutes, and sometimes\n>>>> less than one, we don't really hear any complaints (guessing that most\n>>>> people hit reload, and things work again, so they continue on), and we\n>>>> haven't been able to see any negative effect ourselves.\n>>>>\n>>>> But we want to get in front of the problem, in case it is something\n>>>> that will get worse as traffic continues to grow. We've tweaked various\n>>>> configs on the OS side as well as the postgresql.conf side. What's posted\n>>>> above is our current setup, and the problem persists.\n>>>>\n>>>> Any ideas as to where we could even look?\n>>>>\n>>>> Also, whether related or unrelated to the spikes, are there any\n>>>> recommendations for our postgresql.conf or sysctl.conf based on our\n>>>> hardware? From pgtune's output, I am lowering maintenance_work_mem from\n>>>> 24GB down to maybe 2GB, but I keep reading conflicting things about other\n>>>> settings, such as checkpoints or max_connections.\n>>>>\n>>>> johnny\n>>>>\n>>>>\n>>\n>",
"msg_date": "Wed, 6 Feb 2013 10:42:11 -0500",
"msg_from": "David Whittaker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
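Disabling THP the way David describes usually comes down to a couple of sysfs writes. A minimal sketch, assuming the RHEL 6 paths quoted in this thread (mainline kernels drop the redhat_ prefix), run as root:

# Show the active setting first; the bracketed value is the one in effect.
cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
cat /sys/kernel/mm/redhat_transparent_hugepage/defrag

# Turn THP and its allocation-time defrag off at runtime. This does not survive
# a reboot; add the same echoes to rc.local or boot with transparent_hugepage=never.
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag

Writing never only to the defrag file is the middle ground discussed later in the thread: THP stays on, but the synchronous compaction at allocation time is skipped.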
{
"msg_contents": "David,\n\nInteresting observations. I had not been tracking the interrupts but\nperhaps I should take a look. How are you measuring them over a period of\ntime, or are you just getting them real time?\n\nDid you turn off THP all together or just the THP defrag?\n\n\nOn Wed, Feb 6, 2013 at 10:42 AM, David Whittaker <[email protected]> wrote:\n\n> Josh/Johnny,\n>\n> We've been seeing a similar problem as well, and had also figured THP was\n> involved. We found this in syslog:\n> https://gist.github.com/davewhittaker/4723285, which led us to disable\n> THP 2 days ago. At first the results seemed good. In particular, our\n> issues always seemed interrupt related and the average interrupts/sec\n> immediately dropped from 7k to around 3k after restarting. The good news\n> is that we didn't see any spike in system CPU time yesterday. The bad news\n> is that we did see a spike in app latency that originated from the DB, but\n> now the spike is in user CPU time and seems to be spread across all of the\n> running postgres processes. Interrupts still blew up to 21k/sec when it\n> happened. We are still diagnosing, but I'd be curious to see if either of\n> you get similar results from turning THP off.\n>\n>\n> On Tue, Feb 5, 2013 at 11:23 PM, Josh Krupka <[email protected]> wrote:\n>\n>> I've been looking into something on our system that sounds similar to\n>> what you're seeing. I'm still researching it, but I'm suspecting the\n>> memory compaction that runs as part of transparent huge pages when memory\n>> is allocated... yet to be proven. The tunable you mentioned controls the\n>> compaction process that runs at allocation time so it can try to allocate\n>> large pages, there's a separate one that controls if the compaction is done\n>> in khugepaged, and a separate one that controls whether THP is used at all\n>> or not (/sys/kernel/mm/transparent_hugepage/enabled, or perhaps different\n>> in your distro)\n>>\n>> What's the output of this command?\n>> egrep 'trans|thp|compact_' /proc/vmstat\n>> compact_stall represents the number of processes that were stalled to do\n>> a compaction, the other metrics have to do with other parts of THP. If you\n>> see compact_stall climbing, from what I can tell those might be causing\n>> your spikes. I haven't found a way of telling how long the processes have\n>> been stalled. 
You could probably get a little more insight into the\n>> processes with some tracing assuming you can catch it quickly enough.\n>> Running perf top will also show the compaction happening but that doesn't\n>> necessarily mean it's impacting your running processes.\n>>\n>>\n>>\n>>\n>> On Tue, Feb 5, 2013 at 6:46 PM, Johnny Tan <[email protected]> wrote:\n>>\n>>> # cat /sys/kernel/mm/redhat_transparent_hugepage/defrag\n>>> [always] never\n>>>\n>>>\n>>> On Tue, Feb 5, 2013 at 5:37 PM, Josh Krupka <[email protected]> wrote:\n>>>\n>>>> Just out of curiosity, are you using transparent huge pages?\n>>>> On Feb 5, 2013 5:03 PM, \"Johnny Tan\" <[email protected]> wrote:\n>>>>\n>>>>> Server specs:\n>>>>> Dell R610\n>>>>> dual E5645 hex-core 2.4GHz\n>>>>> 192GB RAM\n>>>>> RAID 1: 2x400GB SSD (OS + WAL logs)\n>>>>> RAID 10: 4x400GB SSD (/var/lib/pgsql)\n>>>>>\n>>>>>\n>>>>> /etc/sysctl.conf:\n>>>>> kernel.msgmnb = 65536\n>>>>> kernel.msgmax = 65536\n>>>>> kernel.shmmax = 68719476736\n>>>>> kernel.shmall = 4294967296\n>>>>> vm.overcommit_memory = 0\n>>>>> vm.swappiness = 0\n>>>>> vm.dirty_background_bytes = 536870912\n>>>>> vm.dirty_bytes = 536870912\n>>>>>\n>>>>>\n>>>>> postgresql.conf:\n>>>>> listen_addresses = '*' # what IP address(es) to listen on;\n>>>>> max_connections = 150 # (change requires restart)\n>>>>> shared_buffers = 48GB # min 128kB\n>>>>> work_mem = 1310MB # min 64kB\n>>>>> maintenance_work_mem = 24GB # min 1MB\n>>>>> wal_level = hot_standby # minimal, archive, or hot_standby\n>>>>> checkpoint_segments = 64 # in logfile segments, min 1, 16MB each\n>>>>> checkpoint_timeout = 30min # range 30s-1h\n>>>>> checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0\n>>>>> - 1.0\n>>>>> max_wal_senders = 5 # max number of walsender processes\n>>>>> wal_keep_segments = 2000 # in logfile segments, 16MB each; 0 disables\n>>>>> hot_standby = on # \"on\" allows queries during recovery\n>>>>> max_standby_archive_delay = 120s # max delay before canceling queries\n>>>>> max_standby_streaming_delay = 120s # max delay before canceling\n>>>>> queries\n>>>>> effective_cache_size = 120GB\n>>>>> constraint_exclusion = partition # on, off, or partition\n>>>>> log_destination = 'syslog' # Valid values are combinations of\n>>>>> logging_collector = on # Enable capturing of stderr and csvlog\n>>>>> log_directory = 'pg_log' # directory where log files are written,\n>>>>> log_filename = 'postgresql-%a.log' # log file name pattern,\n>>>>> log_truncate_on_rotation = on # If on, an existing log file with the\n>>>>> log_rotation_age = 1d # Automatic rotation of logfiles will\n>>>>> log_rotation_size = 0 # Automatic rotation of logfiles will\n>>>>> log_min_duration_statement = 500 # -1 is disabled, 0 logs all\n>>>>> statements\n>>>>> log_checkpoints = on\n>>>>> log_line_prefix = 'user=%u db=%d remote=%r ' # special values:\n>>>>> log_lock_waits = on # log lock waits >= deadlock_timeout\n>>>>> autovacuum = on # Enable autovacuum subprocess? 
'on'\n>>>>> log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and\n>>>>> autovacuum_max_workers = 5 # max number of autovacuum subprocesses\n>>>>> datestyle = 'iso, mdy'\n>>>>> lc_messages = 'en_US.UTF-8' # locale for system error message\n>>>>> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting\n>>>>> lc_numeric = 'en_US.UTF-8' # locale for number formatting\n>>>>> lc_time = 'en_US.UTF-8' # locale for time formatting\n>>>>> default_text_search_config = 'pg_catalog.english'\n>>>>> deadlock_timeout = 300ms\n>>>>>\n>>>>>\n>>>>> per pgtune:\n>>>>>\n>>>>> #------------------------------------------------------------------------------\n>>>>> # pgtune wizard run on 2013-02-05\n>>>>> # Based on 198333224 KB RAM in the server\n>>>>>\n>>>>> #------------------------------------------------------------------------------\n>>>>> default_statistics_target = 100\n>>>>> maintenance_work_mem = 1GB\n>>>>> checkpoint_completion_target = 0.9\n>>>>> effective_cache_size = 128GB\n>>>>> work_mem = 1152MB\n>>>>> wal_buffers = 8MB\n>>>>> checkpoint_segments = 16\n>>>>> shared_buffers = 44GB\n>>>>> max_connections = 80\n>>>>>\n>>>>> We use pgbouncer (set to 140 connections) in transaction pooling mode\n>>>>> in front of our db.\n>>>>>\n>>>>>\n>>>>> The problem:\n>>>>>\n>>>>> For the most part, the server hums along. No other applications run on\n>>>>> this server other than postgres. Load averages rarely break 2.0, it never\n>>>>> swaps, and %iowait is usually not more than 0.12\n>>>>>\n>>>>> But periodically, there are spikes in our app's db response time.\n>>>>> Normally, the app's db response time hovers in the 100ms range for most of\n>>>>> the day. During the spike times, it can go up to 1000ms or 1500ms, and the\n>>>>> number of pg connections goes to 140 (maxed out to pgbouncer's limit, where\n>>>>> normally it's only about 20-40 connections). Also, during these times,\n>>>>> which usually last less than 2 minutes, we will see several thousand\n>>>>> queries in the pg log (this is with log_min_duration_statement = 500),\n>>>>> compared to maybe one or two dozen 500ms+ queries in non-spike times.\n>>>>>\n>>>>> Inbetween spikes could be an hour, two hours, sometimes half a day.\n>>>>> There doesn't appear to be any pattern that we can see:\n>>>>> * there are no obvious queries that are locking the db\n>>>>> * it doesn't necessarily happen during high-traffic times, though it\n>>>>> can\n>>>>> * it doesn't happen during any known system, db, or app\n>>>>> regularly-scheduled job, including crons\n>>>>> * in general, there's no discernible regularity to it at all\n>>>>> * it doesn't coincide with checkpoint starts or completions\n>>>>> * it doesn't coincide with autovacuums\n>>>>> * there are no messages in any system logs that might indicate any\n>>>>> system or hardware-related issue\n>>>>>\n>>>>> Besides spikes in our graphs, the only other visible effect is that\n>>>>> %system in sar goes from average of 0.7 to as high as 10.0 or so (%iowait\n>>>>> and all other sar variables remain the same).\n>>>>>\n>>>>> And according to our monitoring system, web requests get queued up,\n>>>>> and our alerting system sometimes either says there's a timeout or that it\n>>>>> had multiple web response times greater than 300ms, and so we suspect (but\n>>>>> have no proof) that some users will see either a long hang or possibly a\n>>>>> timeout. 
But since it's almost always less than two minutes, and sometimes\n>>>>> less than one, we don't really hear any complaints (guessing that most\n>>>>> people hit reload, and things work again, so they continue on), and we\n>>>>> haven't been able to see any negative effect ourselves.\n>>>>>\n>>>>> But we want to get in front of the problem, in case it is something\n>>>>> that will get worse as traffic continues to grow. We've tweaked various\n>>>>> configs on the OS side as well as the postgresql.conf side. What's posted\n>>>>> above is our current setup, and the problem persists.\n>>>>>\n>>>>> Any ideas as to where we could even look?\n>>>>>\n>>>>> Also, whether related or unrelated to the spikes, are there any\n>>>>> recommendations for our postgresql.conf or sysctl.conf based on our\n>>>>> hardware? From pgtune's output, I am lowering maintenance_work_mem from\n>>>>> 24GB down to maybe 2GB, but I keep reading conflicting things about other\n>>>>> settings, such as checkpoints or max_connections.\n>>>>>\n>>>>> johnny\n>>>>>\n>>>>>\n>>>\n>>\n>\n\nDavid,Interesting observations. I had not been tracking the interrupts but perhaps I should take a look. How are you measuring them over a period of time, or are you just getting them real time? \nDid you turn off THP all together or just the THP defrag?On Wed, Feb 6, 2013 at 10:42 AM, David Whittaker <[email protected]> wrote:\nJosh/Johnny,We've been seeing a similar problem as well, and had also figured THP was involved. We found this in syslog: https://gist.github.com/davewhittaker/4723285, which led us to disable THP 2 days ago. At first the results seemed good. In particular, our issues always seemed interrupt related and the average interrupts/sec immediately dropped from 7k to around 3k after restarting. The good news is that we didn't see any spike in system CPU time yesterday. The bad news is that we did see a spike in app latency that originated from the DB, but now the spike is in user CPU time and seems to be spread across all of the running postgres processes. Interrupts still blew up to 21k/sec when it happened. We are still diagnosing, but I'd be curious to see if either of you get similar results from turning THP off.\nOn Tue, Feb 5, 2013 at 11:23 PM, Josh Krupka <[email protected]> wrote:\nI've been looking into something on our system that sounds similar to what you're seeing. I'm still researching it, but I'm suspecting the memory compaction that runs as part of transparent huge pages when memory is allocated... yet to be proven. The tunable you mentioned controls the compaction process that runs at allocation time so it can try to allocate large pages, there's a separate \none that controls if the compaction is done in khugepaged, and a separate \none that controls whether THP is used at all or not (/sys/kernel/mm/transparent_hugepage/enabled, or perhaps different in your distro)What's the output of this command?egrep 'trans|thp|compact_' /proc/vmstat\ncompact_stall represents the number of processes that were stalled to do a compaction, the other metrics have to do with other parts of THP. If you see compact_stall climbing, from what I can tell those might be causing your spikes. I haven't found a way of telling how long the processes have been stalled. You could probably get a little more insight into the processes with some tracing assuming you can catch it quickly enough. 
Running perf top will also show the compaction happening but that doesn't necessarily mean it's impacting your running processes.\nOn Tue, Feb 5, 2013 at 6:46 PM, Johnny Tan <[email protected]> wrote:\n# cat /sys/kernel/mm/redhat_transparent_hugepage/defrag [always] never\nOn Tue, Feb 5, 2013 at 5:37 PM, Josh Krupka <[email protected]> wrote:\nJust out of curiosity, are you using transparent huge pages?\nOn Feb 5, 2013 5:03 PM, \"Johnny Tan\" <[email protected]> wrote:\nServer specs:Dell R610dual E5645 hex-core 2.4GHz192GB RAMRAID 1: 2x400GB SSD (OS + WAL logs)RAID 10: 4x400GB SSD (/var/lib/pgsql)\n/etc/sysctl.conf:kernel.msgmnb = 65536kernel.msgmax = 65536kernel.shmmax = 68719476736kernel.shmall = 4294967296\nvm.overcommit_memory = 0vm.swappiness = 0vm.dirty_background_bytes = 536870912vm.dirty_bytes = 536870912postgresql.conf:\nlisten_addresses = '*' # what IP address(es) to listen on;max_connections = 150 # (change requires restart)\nshared_buffers = 48GB # min 128kBwork_mem = 1310MB # min 64kBmaintenance_work_mem = 24GB # min 1MB\nwal_level = hot_standby # minimal, archive, or hot_standbycheckpoint_segments = 64 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 30min # range 30s-1hcheckpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0\nmax_wal_senders = 5 # max number of walsender processeswal_keep_segments = 2000 # in logfile segments, 16MB each; 0 disables\nhot_standby = on # \"on\" allows queries during recoverymax_standby_archive_delay = 120s # max delay before canceling queries\nmax_standby_streaming_delay = 120s # max delay before canceling querieseffective_cache_size = 120GBconstraint_exclusion = partition # on, off, or partition\nlog_destination = 'syslog' # Valid values are combinations oflogging_collector = on # Enable capturing of stderr and csvlog\nlog_directory = 'pg_log' # directory where log files are written,log_filename = 'postgresql-%a.log' # log file name pattern,\nlog_truncate_on_rotation = on # If on, an existing log file with thelog_rotation_age = 1d # Automatic rotation of logfiles will\nlog_rotation_size = 0 # Automatic rotation of logfiles willlog_min_duration_statement = 500 # -1 is disabled, 0 logs all statements\nlog_checkpoints = onlog_line_prefix = 'user=%u db=%d remote=%r ' # special values:log_lock_waits = on # log lock waits >= deadlock_timeout\nautovacuum = on # Enable autovacuum subprocess? 'on'log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and\nautovacuum_max_workers = 5 # max number of autovacuum subprocessesdatestyle = 'iso, mdy'lc_messages = 'en_US.UTF-8' # locale for system error message\nlc_monetary = 'en_US.UTF-8' # locale for monetary formattinglc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formattingdefault_text_search_config = 'pg_catalog.english'deadlock_timeout = 300ms\nper pgtune:#------------------------------------------------------------------------------# pgtune wizard run on 2013-02-05# Based on 198333224 KB RAM in the server\n#------------------------------------------------------------------------------default_statistics_target = 100maintenance_work_mem = 1GBcheckpoint_completion_target = 0.9\n\n\n\n\n\n\n\n\neffective_cache_size = 128GBwork_mem = 1152MBwal_buffers = 8MBcheckpoint_segments = 16shared_buffers = 44GBmax_connections = 80We use pgbouncer (set to 140 connections) in transaction pooling mode in front of our db.\nThe problem:For the most part, the server hums along. 
No other applications run on this server other than postgres. Load averages rarely break 2.0, it never swaps, and %iowait is usually not more than 0.12\nBut periodically, there are spikes in our app's db response time. Normally, the app's db response time hovers in the 100ms range for most of the day. During the spike times, it can go up to 1000ms or 1500ms, and the number of pg connections goes to 140 (maxed out to pgbouncer's limit, where normally it's only about 20-40 connections). Also, during these times, which usually last less than 2 minutes, we will see several thousand queries in the pg log (this is with log_min_duration_statement = 500), compared to maybe one or two dozen 500ms+ queries in non-spike times.\nInbetween spikes could be an hour, two hours, sometimes half a day. There doesn't appear to be any pattern that we can see:* there are no obvious queries that are locking the db\n* it doesn't necessarily happen during high-traffic times, though it can* it doesn't happen during any known system, db, or app regularly-scheduled job, including crons\n* in general, there's no discernible regularity to it at all* it doesn't coincide with checkpoint starts or completions* it doesn't coincide with autovacuums* there are no messages in any system logs that might indicate any system or hardware-related issue\nBesides spikes in our graphs, the only other visible effect is that %system in sar goes from average of 0.7 to as high as 10.0 or so (%iowait and all other sar variables remain the same).\nAnd according to our monitoring system, web requests get queued up, and our alerting system sometimes either says there's a timeout or that it had multiple web response times greater than 300ms, and so we suspect (but have no proof) that some users will see either a long hang or possibly a timeout. But since it's almost always less than two minutes, and sometimes less than one, we don't really hear any complaints (guessing that most people hit reload, and things work again, so they continue on), and we haven't been able to see any negative effect ourselves.\nBut we want to get in front of the problem, in case it is something that will get worse as traffic continues to grow. We've tweaked various configs on the OS side as well as the postgresql.conf side. What's posted above is our current setup, and the problem persists.\nAny ideas as to where we could even look?Also, whether related or unrelated to the spikes, are there any recommendations for our postgresql.conf or sysctl.conf based on our hardware? From pgtune's output, I am lowering maintenance_work_mem from 24GB down to maybe 2GB, but I keep reading conflicting things about other settings, such as checkpoints or max_connections.\njohnny",
"msg_date": "Wed, 6 Feb 2013 13:20:14 -0500",
"msg_from": "Josh Krupka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
{
"msg_contents": "Hi Josh,\n\nOn Wed, Feb 6, 2013 at 1:20 PM, Josh Krupka <[email protected]> wrote:\n\n> David,\n>\n> Interesting observations. I had not been tracking the interrupts but\n> perhaps I should take a look. How are you measuring them over a period of\n> time, or are you just getting them real time?\n>\n\nI initially saw it happen with vmstat, but now I'm collecting them every 5\nminutes over SNMP with Cacti.\n\n\n> Did you turn off THP all together or just the THP defrag?\n>\n\nWe disabled THP all together, with the thought that we might re-enable\nwithout defrag if we got positive results. At this point I don't think THP\nis the root cause though, so I'm curious to see if anyone else gets\npositive results from disabling it. We definitely haven't seen any\nperformance hit from turning it off.\n\n\n> On Wed, Feb 6, 2013 at 10:42 AM, David Whittaker <[email protected]> wrote:\n>\n>> Josh/Johnny,\n>>\n>> We've been seeing a similar problem as well, and had also figured THP was\n>> involved. We found this in syslog:\n>> https://gist.github.com/davewhittaker/4723285, which led us to disable\n>> THP 2 days ago. At first the results seemed good. In particular, our\n>> issues always seemed interrupt related and the average interrupts/sec\n>> immediately dropped from 7k to around 3k after restarting. The good news\n>> is that we didn't see any spike in system CPU time yesterday. The bad news\n>> is that we did see a spike in app latency that originated from the DB, but\n>> now the spike is in user CPU time and seems to be spread across all of the\n>> running postgres processes. Interrupts still blew up to 21k/sec when it\n>> happened. We are still diagnosing, but I'd be curious to see if either of\n>> you get similar results from turning THP off.\n>>\n>>\n>> On Tue, Feb 5, 2013 at 11:23 PM, Josh Krupka <[email protected]> wrote:\n>>\n>>> I've been looking into something on our system that sounds similar to\n>>> what you're seeing. I'm still researching it, but I'm suspecting the\n>>> memory compaction that runs as part of transparent huge pages when memory\n>>> is allocated... yet to be proven. The tunable you mentioned controls the\n>>> compaction process that runs at allocation time so it can try to allocate\n>>> large pages, there's a separate one that controls if the compaction is done\n>>> in khugepaged, and a separate one that controls whether THP is used at all\n>>> or not (/sys/kernel/mm/transparent_hugepage/enabled, or perhaps different\n>>> in your distro)\n>>>\n>>> What's the output of this command?\n>>> egrep 'trans|thp|compact_' /proc/vmstat\n>>> compact_stall represents the number of processes that were stalled to\n>>> do a compaction, the other metrics have to do with other parts of THP. If\n>>> you see compact_stall climbing, from what I can tell those might be\n>>> causing your spikes. I haven't found a way of telling how long the\n>>> processes have been stalled. You could probably get a little more insight\n>>> into the processes with some tracing assuming you can catch it quickly\n>>> enough. 
Running perf top will also show the compaction happening but that\n>>> doesn't necessarily mean it's impacting your running processes.\n>>>\n>>>\n>>>\n>>>\n>>> On Tue, Feb 5, 2013 at 6:46 PM, Johnny Tan <[email protected]> wrote:\n>>>\n>>>> # cat /sys/kernel/mm/redhat_transparent_hugepage/defrag\n>>>> [always] never\n>>>>\n>>>>\n>>>> On Tue, Feb 5, 2013 at 5:37 PM, Josh Krupka <[email protected]> wrote:\n>>>>\n>>>>> Just out of curiosity, are you using transparent huge pages?\n>>>>> On Feb 5, 2013 5:03 PM, \"Johnny Tan\" <[email protected]> wrote:\n>>>>>\n>>>>>> Server specs:\n>>>>>> Dell R610\n>>>>>> dual E5645 hex-core 2.4GHz\n>>>>>> 192GB RAM\n>>>>>> RAID 1: 2x400GB SSD (OS + WAL logs)\n>>>>>> RAID 10: 4x400GB SSD (/var/lib/pgsql)\n>>>>>>\n>>>>>>\n>>>>>> /etc/sysctl.conf:\n>>>>>> kernel.msgmnb = 65536\n>>>>>> kernel.msgmax = 65536\n>>>>>> kernel.shmmax = 68719476736\n>>>>>> kernel.shmall = 4294967296\n>>>>>> vm.overcommit_memory = 0\n>>>>>> vm.swappiness = 0\n>>>>>> vm.dirty_background_bytes = 536870912\n>>>>>> vm.dirty_bytes = 536870912\n>>>>>>\n>>>>>>\n>>>>>> postgresql.conf:\n>>>>>> listen_addresses = '*' # what IP address(es) to listen on;\n>>>>>> max_connections = 150 # (change requires restart)\n>>>>>> shared_buffers = 48GB # min 128kB\n>>>>>> work_mem = 1310MB # min 64kB\n>>>>>> maintenance_work_mem = 24GB # min 1MB\n>>>>>> wal_level = hot_standby # minimal, archive, or hot_standby\n>>>>>> checkpoint_segments = 64 # in logfile segments, min 1, 16MB each\n>>>>>> checkpoint_timeout = 30min # range 30s-1h\n>>>>>> checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0\n>>>>>> - 1.0\n>>>>>> max_wal_senders = 5 # max number of walsender processes\n>>>>>> wal_keep_segments = 2000 # in logfile segments, 16MB each; 0 disables\n>>>>>> hot_standby = on # \"on\" allows queries during recovery\n>>>>>> max_standby_archive_delay = 120s # max delay before canceling queries\n>>>>>> max_standby_streaming_delay = 120s # max delay before canceling\n>>>>>> queries\n>>>>>> effective_cache_size = 120GB\n>>>>>> constraint_exclusion = partition # on, off, or partition\n>>>>>> log_destination = 'syslog' # Valid values are combinations of\n>>>>>> logging_collector = on # Enable capturing of stderr and csvlog\n>>>>>> log_directory = 'pg_log' # directory where log files are written,\n>>>>>> log_filename = 'postgresql-%a.log' # log file name pattern,\n>>>>>> log_truncate_on_rotation = on # If on, an existing log file with the\n>>>>>> log_rotation_age = 1d # Automatic rotation of logfiles will\n>>>>>> log_rotation_size = 0 # Automatic rotation of logfiles will\n>>>>>> log_min_duration_statement = 500 # -1 is disabled, 0 logs all\n>>>>>> statements\n>>>>>> log_checkpoints = on\n>>>>>> log_line_prefix = 'user=%u db=%d remote=%r ' # special values:\n>>>>>> log_lock_waits = on # log lock waits >= deadlock_timeout\n>>>>>> autovacuum = on # Enable autovacuum subprocess? 
'on'\n>>>>>> log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and\n>>>>>> autovacuum_max_workers = 5 # max number of autovacuum subprocesses\n>>>>>> datestyle = 'iso, mdy'\n>>>>>> lc_messages = 'en_US.UTF-8' # locale for system error message\n>>>>>> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting\n>>>>>> lc_numeric = 'en_US.UTF-8' # locale for number formatting\n>>>>>> lc_time = 'en_US.UTF-8' # locale for time formatting\n>>>>>> default_text_search_config = 'pg_catalog.english'\n>>>>>> deadlock_timeout = 300ms\n>>>>>>\n>>>>>>\n>>>>>> per pgtune:\n>>>>>>\n>>>>>> #------------------------------------------------------------------------------\n>>>>>> # pgtune wizard run on 2013-02-05\n>>>>>> # Based on 198333224 KB RAM in the server\n>>>>>>\n>>>>>> #------------------------------------------------------------------------------\n>>>>>> default_statistics_target = 100\n>>>>>> maintenance_work_mem = 1GB\n>>>>>> checkpoint_completion_target = 0.9\n>>>>>> effective_cache_size = 128GB\n>>>>>> work_mem = 1152MB\n>>>>>> wal_buffers = 8MB\n>>>>>> checkpoint_segments = 16\n>>>>>> shared_buffers = 44GB\n>>>>>> max_connections = 80\n>>>>>>\n>>>>>> We use pgbouncer (set to 140 connections) in transaction pooling mode\n>>>>>> in front of our db.\n>>>>>>\n>>>>>>\n>>>>>> The problem:\n>>>>>>\n>>>>>> For the most part, the server hums along. No other applications run\n>>>>>> on this server other than postgres. Load averages rarely break 2.0, it\n>>>>>> never swaps, and %iowait is usually not more than 0.12\n>>>>>>\n>>>>>> But periodically, there are spikes in our app's db response time.\n>>>>>> Normally, the app's db response time hovers in the 100ms range for most of\n>>>>>> the day. During the spike times, it can go up to 1000ms or 1500ms, and the\n>>>>>> number of pg connections goes to 140 (maxed out to pgbouncer's limit, where\n>>>>>> normally it's only about 20-40 connections). Also, during these times,\n>>>>>> which usually last less than 2 minutes, we will see several thousand\n>>>>>> queries in the pg log (this is with log_min_duration_statement = 500),\n>>>>>> compared to maybe one or two dozen 500ms+ queries in non-spike times.\n>>>>>>\n>>>>>> Inbetween spikes could be an hour, two hours, sometimes half a day.\n>>>>>> There doesn't appear to be any pattern that we can see:\n>>>>>> * there are no obvious queries that are locking the db\n>>>>>> * it doesn't necessarily happen during high-traffic times, though it\n>>>>>> can\n>>>>>> * it doesn't happen during any known system, db, or app\n>>>>>> regularly-scheduled job, including crons\n>>>>>> * in general, there's no discernible regularity to it at all\n>>>>>> * it doesn't coincide with checkpoint starts or completions\n>>>>>> * it doesn't coincide with autovacuums\n>>>>>> * there are no messages in any system logs that might indicate any\n>>>>>> system or hardware-related issue\n>>>>>>\n>>>>>> Besides spikes in our graphs, the only other visible effect is that\n>>>>>> %system in sar goes from average of 0.7 to as high as 10.0 or so (%iowait\n>>>>>> and all other sar variables remain the same).\n>>>>>>\n>>>>>> And according to our monitoring system, web requests get queued up,\n>>>>>> and our alerting system sometimes either says there's a timeout or that it\n>>>>>> had multiple web response times greater than 300ms, and so we suspect (but\n>>>>>> have no proof) that some users will see either a long hang or possibly a\n>>>>>> timeout. 
But since it's almost always less than two minutes, and sometimes\n>>>>>> less than one, we don't really hear any complaints (guessing that most\n>>>>>> people hit reload, and things work again, so they continue on), and we\n>>>>>> haven't been able to see any negative effect ourselves.\n>>>>>>\n>>>>>> But we want to get in front of the problem, in case it is something\n>>>>>> that will get worse as traffic continues to grow. We've tweaked various\n>>>>>> configs on the OS side as well as the postgresql.conf side. What's posted\n>>>>>> above is our current setup, and the problem persists.\n>>>>>>\n>>>>>> Any ideas as to where we could even look?\n>>>>>>\n>>>>>> Also, whether related or unrelated to the spikes, are there any\n>>>>>> recommendations for our postgresql.conf or sysctl.conf based on our\n>>>>>> hardware? From pgtune's output, I am lowering maintenance_work_mem from\n>>>>>> 24GB down to maybe 2GB, but I keep reading conflicting things about other\n>>>>>> settings, such as checkpoints or max_connections.\n>>>>>>\n>>>>>> johnny\n>>>>>>\n>>>>>>\n>>>>\n>>>\n>>\n>\n\nHi Josh,On Wed, Feb 6, 2013 at 1:20 PM, Josh Krupka <[email protected]> wrote:\nDavid,Interesting observations. I had not been tracking the interrupts but perhaps I should take a look. How are you measuring them over a period of time, or are you just getting them real time? \nI initially saw it happen with vmstat, but now I'm collecting them every 5 minutes over SNMP with Cacti. \nDid you turn off THP all together or just the THP defrag?We disabled THP all together, with the thought that we might re-enable without defrag if we got positive results. At this point I don't think THP is the root cause though, so I'm curious to see if anyone else gets positive results from disabling it. We definitely haven't seen any performance hit from turning it off.\n On Wed, Feb 6, 2013 at 10:42 AM, David Whittaker <[email protected]> wrote:\nJosh/Johnny,We've been seeing a similar problem as well, and had also figured THP was involved. We found this in syslog: https://gist.github.com/davewhittaker/4723285, which led us to disable THP 2 days ago. At first the results seemed good. In particular, our issues always seemed interrupt related and the average interrupts/sec immediately dropped from 7k to around 3k after restarting. The good news is that we didn't see any spike in system CPU time yesterday. The bad news is that we did see a spike in app latency that originated from the DB, but now the spike is in user CPU time and seems to be spread across all of the running postgres processes. Interrupts still blew up to 21k/sec when it happened. We are still diagnosing, but I'd be curious to see if either of you get similar results from turning THP off.\nOn Tue, Feb 5, 2013 at 11:23 PM, Josh Krupka <[email protected]> wrote:\nI've been looking into something on our system that sounds similar to what you're seeing. I'm still researching it, but I'm suspecting the memory compaction that runs as part of transparent huge pages when memory is allocated... yet to be proven. 
The tunable you mentioned controls the compaction process that runs at allocation time so it can try to allocate large pages, there's a separate \none that controls if the compaction is done in khugepaged, and a separate \none that controls whether THP is used at all or not (/sys/kernel/mm/transparent_hugepage/enabled, or perhaps different in your distro)What's the output of this command?egrep 'trans|thp|compact_' /proc/vmstat\ncompact_stall represents the number of processes that were stalled to do a compaction, the other metrics have to do with other parts of THP. If you see compact_stall climbing, from what I can tell those might be causing your spikes. I haven't found a way of telling how long the processes have been stalled. You could probably get a little more insight into the processes with some tracing assuming you can catch it quickly enough. Running perf top will also show the compaction happening but that doesn't necessarily mean it's impacting your running processes.\nOn Tue, Feb 5, 2013 at 6:46 PM, Johnny Tan <[email protected]> wrote:\n# cat /sys/kernel/mm/redhat_transparent_hugepage/defrag [always] never\nOn Tue, Feb 5, 2013 at 5:37 PM, Josh Krupka <[email protected]> wrote:\nJust out of curiosity, are you using transparent huge pages?\nOn Feb 5, 2013 5:03 PM, \"Johnny Tan\" <[email protected]> wrote:\nServer specs:Dell R610dual E5645 hex-core 2.4GHz192GB RAMRAID 1: 2x400GB SSD (OS + WAL logs)RAID 10: 4x400GB SSD (/var/lib/pgsql)\n/etc/sysctl.conf:kernel.msgmnb = 65536kernel.msgmax = 65536kernel.shmmax = 68719476736kernel.shmall = 4294967296\nvm.overcommit_memory = 0vm.swappiness = 0vm.dirty_background_bytes = 536870912vm.dirty_bytes = 536870912postgresql.conf:\nlisten_addresses = '*' # what IP address(es) to listen on;max_connections = 150 # (change requires restart)\nshared_buffers = 48GB # min 128kBwork_mem = 1310MB # min 64kBmaintenance_work_mem = 24GB # min 1MB\nwal_level = hot_standby # minimal, archive, or hot_standbycheckpoint_segments = 64 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 30min # range 30s-1hcheckpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0\nmax_wal_senders = 5 # max number of walsender processeswal_keep_segments = 2000 # in logfile segments, 16MB each; 0 disables\nhot_standby = on # \"on\" allows queries during recoverymax_standby_archive_delay = 120s # max delay before canceling queries\nmax_standby_streaming_delay = 120s # max delay before canceling querieseffective_cache_size = 120GBconstraint_exclusion = partition # on, off, or partition\nlog_destination = 'syslog' # Valid values are combinations oflogging_collector = on # Enable capturing of stderr and csvlog\nlog_directory = 'pg_log' # directory where log files are written,log_filename = 'postgresql-%a.log' # log file name pattern,\nlog_truncate_on_rotation = on # If on, an existing log file with thelog_rotation_age = 1d # Automatic rotation of logfiles will\nlog_rotation_size = 0 # Automatic rotation of logfiles willlog_min_duration_statement = 500 # -1 is disabled, 0 logs all statements\nlog_checkpoints = onlog_line_prefix = 'user=%u db=%d remote=%r ' # special values:log_lock_waits = on # log lock waits >= deadlock_timeout\nautovacuum = on # Enable autovacuum subprocess? 
'on'log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and\nautovacuum_max_workers = 5 # max number of autovacuum subprocessesdatestyle = 'iso, mdy'lc_messages = 'en_US.UTF-8' # locale for system error message\nlc_monetary = 'en_US.UTF-8' # locale for monetary formattinglc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formattingdefault_text_search_config = 'pg_catalog.english'deadlock_timeout = 300ms\nper pgtune:#------------------------------------------------------------------------------# pgtune wizard run on 2013-02-05# Based on 198333224 KB RAM in the server\n#------------------------------------------------------------------------------default_statistics_target = 100maintenance_work_mem = 1GBcheckpoint_completion_target = 0.9\n\n\n\n\n\n\n\n\n\n\neffective_cache_size = 128GBwork_mem = 1152MBwal_buffers = 8MBcheckpoint_segments = 16shared_buffers = 44GBmax_connections = 80We use pgbouncer (set to 140 connections) in transaction pooling mode in front of our db.\nThe problem:For the most part, the server hums along. No other applications run on this server other than postgres. Load averages rarely break 2.0, it never swaps, and %iowait is usually not more than 0.12\nBut periodically, there are spikes in our app's db response time. Normally, the app's db response time hovers in the 100ms range for most of the day. During the spike times, it can go up to 1000ms or 1500ms, and the number of pg connections goes to 140 (maxed out to pgbouncer's limit, where normally it's only about 20-40 connections). Also, during these times, which usually last less than 2 minutes, we will see several thousand queries in the pg log (this is with log_min_duration_statement = 500), compared to maybe one or two dozen 500ms+ queries in non-spike times.\nInbetween spikes could be an hour, two hours, sometimes half a day. There doesn't appear to be any pattern that we can see:* there are no obvious queries that are locking the db\n* it doesn't necessarily happen during high-traffic times, though it can* it doesn't happen during any known system, db, or app regularly-scheduled job, including crons\n* in general, there's no discernible regularity to it at all* it doesn't coincide with checkpoint starts or completions* it doesn't coincide with autovacuums* there are no messages in any system logs that might indicate any system or hardware-related issue\nBesides spikes in our graphs, the only other visible effect is that %system in sar goes from average of 0.7 to as high as 10.0 or so (%iowait and all other sar variables remain the same).\nAnd according to our monitoring system, web requests get queued up, and our alerting system sometimes either says there's a timeout or that it had multiple web response times greater than 300ms, and so we suspect (but have no proof) that some users will see either a long hang or possibly a timeout. But since it's almost always less than two minutes, and sometimes less than one, we don't really hear any complaints (guessing that most people hit reload, and things work again, so they continue on), and we haven't been able to see any negative effect ourselves.\nBut we want to get in front of the problem, in case it is something that will get worse as traffic continues to grow. We've tweaked various configs on the OS side as well as the postgresql.conf side. 
What's posted above is our current setup, and the problem persists.\nAny ideas as to where we could even look?Also, whether related or unrelated to the spikes, are there any recommendations for our postgresql.conf or sysctl.conf based on our hardware? From pgtune's output, I am lowering maintenance_work_mem from 24GB down to maybe 2GB, but I keep reading conflicting things about other settings, such as checkpoints or max_connections.\njohnny",
"msg_date": "Wed, 6 Feb 2013 14:13:56 -0500",
"msg_from": "David Whittaker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
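For the interrupt counts David mentions, the quick check is vmstat's "in" column; a rough sketch for logging it over time without SNMP/Cacti (the interval and log path are arbitrary placeholders):

# Interactive view: the "in" column is interrupts per second over each 5s sample.
vmstat 5

# Or log a timestamped one-second sample every 5 minutes for later correlation
# with the Postgres log ($11 is the "in" column of vmstat's output).
while true; do
    echo "$(date '+%F %T') in/s=$(vmstat 1 2 | tail -1 | awk '{print $11}')"
    sleep 300
done >> /var/tmp/interrupts.log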
{
"msg_contents": "On Wed, Feb 6, 2013 at 7:49 AM, Kevin Grittner <[email protected]> wrote:\n\n> \"[email protected]\" <[email protected]> wrote:\n> > Johnny Tan <[email protected]> wrote:\n>\n> >>shared_buffers = 48GB# min 128kB\n>\n> > From the postgresql.conf, I can see that the shared_buffers is\n> > set to 48GB which is not small, it would be possible that the\n> > large buffer cache could be \"dirty\", when a checkpoint starts, it\n> > would cause a checkpoint I/O spike.\n> >\n> >\n> > I would like to suggest you about using pgtune to get recommended\n> > conf for postgresql.\n>\n> I have seen symptoms like those described which were the result of\n> too many dirty pages accumulating inside PostgreSQL shared_buffers.\n> It might be something else entirely in this case, but it would at\n> least be worth trying a reduced shared_buffers setting combined\n> with more aggressive bgwriter settings. I might try something like\n> the following changes, as an experiment:\n>\n> shared_buffers = 8GB\n> bgwriter_lru_maxpages = 1000\n> bgwriter_lru_multiplier = 4\n>\n\nThanks Kevin. Wouldn't this be controlled by our checkpoint settings,\nthough?\n\nOn Wed, Feb 6, 2013 at 7:49 AM, Kevin Grittner <[email protected]> wrote:\n\"[email protected]\" <[email protected]> wrote:\n\n> Johnny Tan <[email protected]> wrote:\n\n>>shared_buffers = 48GB# min 128kB\n\n> From the postgresql.conf, I can see that the shared_buffers is\n> set to 48GB which is not small, it would be possible that the\n> large buffer cache could be \"dirty\", when a checkpoint starts, it\n> would cause a checkpoint I/O spike.\n>\n>\n> I would like to suggest you about using pgtune to get recommended\n> conf for postgresql.\n\nI have seen symptoms like those described which were the result of\ntoo many dirty pages accumulating inside PostgreSQL shared_buffers.\nIt might be something else entirely in this case, but it would at\nleast be worth trying a reduced shared_buffers setting combined\nwith more aggressive bgwriter settings. I might try something like\nthe following changes, as an experiment:\n\nshared_buffers = 8GB\nbgwriter_lru_maxpages = 1000\nbgwriter_lru_multiplier = 4Thanks Kevin. Wouldn't this be controlled by our checkpoint settings, though?",
"msg_date": "Wed, 6 Feb 2013 14:33:45 -0500",
"msg_from": "Johnny Tan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf recommendations"
},
{
"msg_contents": "On Wed, Feb 6, 2013 at 2:13 PM, David Whittaker <[email protected]> wrote:\n\n> We disabled THP all together, with the thought that we might re-enable\n> without defrag if we got positive results. At this point I don't think THP\n> is the root cause though, so I'm curious to see if anyone else gets\n> positive results from disabling it. We definitely haven't seen any\n> performance hit from turning it off.\n>\n\nWe are considering disabling THP, although we aren't seeing the errors in\nsyslog that David mentioned.\n\nJosh: What made you think of THP? I'm curious if there's a discussion you\ncan point me to. Since you mentioned it, I've been looking more into it,\nand there isn't too much. In fact, this post makes it sound like enabling\nit fixes a similar problem to what we're seeing -- i.e., %system shoots up\nduring the spikes:\nhttp://www.pythian.com/blog/performance-tuning-hugepages-in-linux/\n\njohnny\n\nOn Wed, Feb 6, 2013 at 2:13 PM, David Whittaker <[email protected]> wrote:\n\nWe disabled THP all together, with the thought that we might re-enable without defrag if we got positive results. At this point I don't think THP is the root cause though, so I'm curious to see if anyone else gets positive results from disabling it. We definitely haven't seen any performance hit from turning it off.\nWe are considering disabling THP, although we aren't seeing the errors in syslog that David mentioned.Josh: What made you think of THP? I'm curious if there's a discussion you can point me to. Since you mentioned it, I've been looking more into it, and there isn't too much. In fact, this post makes it sound like enabling it fixes a similar problem to what we're seeing -- i.e., %system shoots up during the spikes:\nhttp://www.pythian.com/blog/performance-tuning-hugepages-in-linux/johnny",
"msg_date": "Wed, 6 Feb 2013 14:45:38 -0500",
"msg_from": "Johnny Tan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf recommendations"
},
{
"msg_contents": "On Tue, Feb 5, 2013 at 11:23 PM, Josh Krupka <[email protected]> wrote:\n\n> I've been looking into something on our system that sounds similar to what\n> you're seeing. I'm still researching it, but I'm suspecting the memory\n> compaction that runs as part of transparent huge pages when memory is\n> allocated... yet to be proven. The tunable you mentioned controls the\n> compaction process that runs at allocation time so it can try to allocate\n> large pages, there's a separate one that controls if the compaction is done\n> in khugepaged, and a separate one that controls whether THP is used at all\n> or not (/sys/kernel/mm/transparent_hugepage/enabled, or perhaps different\n> in your distro)\n>\n\nBTW, I sent /defrag yesterday, but /enabled had the same output.\n\n\n> What's the output of this command?\n> egrep 'trans|thp|compact_' /proc/vmstat\n> compact_stall represents the number of processes that were stalled to do\n> a compaction, the other metrics have to do with other parts of THP. If you\n> see compact_stall climbing, from what I can tell those might be causing\n> your spikes. I haven't found a way of telling how long the processes have\n> been stalled. You could probably get a little more insight into the\n> processes with some tracing assuming you can catch it quickly enough.\n> Running perf top will also show the compaction happening but that doesn't\n> necessarily mean it's impacting your running processes.\n>\n\nInteresting:\n\n# egrep 'trans|thp|compact_' /proc/vmstat\nnr_anon_transparent_hugepages 643\ncompact_blocks_moved 22629094\ncompact_pages_moved 532129382\ncompact_pagemigrate_failed 0\ncompact_stall 398051\ncompact_fail 80453\ncompact_success 317598\nthp_fault_alloc 8254106\nthp_fault_fallback 167286\nthp_collapse_alloc 622783\nthp_collapse_alloc_failed 3321\nthp_split 122833\n\nOn Tue, Feb 5, 2013 at 11:23 PM, Josh Krupka <[email protected]> wrote:\nI've been looking into something on our system that sounds similar to what you're seeing. I'm still researching it, but I'm suspecting the memory compaction that runs as part of transparent huge pages when memory is allocated... yet to be proven. The tunable you mentioned controls the compaction process that runs at allocation time so it can try to allocate large pages, there's a separate \none that controls if the compaction is done in khugepaged, and a separate \none that controls whether THP is used at all or not (/sys/kernel/mm/transparent_hugepage/enabled, or perhaps different in your distro)BTW, I sent /defrag yesterday, but /enabled had the same output.\n What's the output of this command?\negrep 'trans|thp|compact_' /proc/vmstat\ncompact_stall represents the number of processes that were stalled to do a compaction, the other metrics have to do with other parts of THP. If you see compact_stall climbing, from what I can tell those might be causing your spikes. I haven't found a way of telling how long the processes have been stalled. You could probably get a little more insight into the processes with some tracing assuming you can catch it quickly enough. 
Running perf top will also show the compaction happening but that doesn't necessarily mean it's impacting your running processes.\nInteresting:# egrep 'trans|thp|compact_' /proc/vmstatnr_anon_transparent_hugepages 643compact_blocks_moved 22629094\ncompact_pages_moved 532129382compact_pagemigrate_failed 0compact_stall 398051compact_fail 80453compact_success 317598thp_fault_alloc 8254106thp_fault_fallback 167286\nthp_collapse_alloc 622783thp_collapse_alloc_failed 3321thp_split 122833",
"msg_date": "Wed, 6 Feb 2013 14:47:54 -0500",
"msg_from": "Johnny Tan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf recommendations"
},
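compact_stall and the other fields above are cumulative counters, so the snapshot only becomes meaningful when sampled repeatedly; a simple way to do that (the file name and interval are arbitrary):

# Append a timestamped copy of the THP/compaction counters every minute. A jump
# in compact_stall between two snapshots can then be lined up with a latency
# spike in the Postgres log.
while true; do
    echo "== $(date '+%F %T')"
    egrep 'trans|thp|compact_' /proc/vmstat
    sleep 60
done >> /var/tmp/thp_counters.log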
{
"msg_contents": "On Tue, Feb 5, 2013 at 2:02 PM, Johnny Tan <[email protected]> wrote:\n\n> checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0\n\nI always set this to 0.9. I don't know why the default is 0.5.\n\n\n> But periodically, there are spikes in our app's db response time. Normally,\n> the app's db response time hovers in the 100ms range for most of the day.\n> During the spike times, it can go up to 1000ms or 1500ms, and the number of\n> pg connections goes to 140 (maxed out to pgbouncer's limit, where normally\n> it's only about 20-40 connections).\n\nWhat if you lower the pgbouncer limit to 40?\n\nIt is hard to know if the latency spikes cause the connection build\nup, or if the connection build up cause the latency spikes, or if they\nreinforce each other in a vicious circle. But making the connections\nwait in pgbouncer's queue rather than in the server should do no harm,\nand very well might help.\n\n> Also, during these times, which usually\n> last less than 2 minutes, we will see several thousand queries in the pg log\n> (this is with log_min_duration_statement = 500), compared to maybe one or\n> two dozen 500ms+ queries in non-spike times.\n\nIs the nature of the queries the same, just the duration that changes?\n Or are the queries of a different nature?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 6 Feb 2013 12:12:42 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
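For Jeff's suggestion, the knob that matters is pgbouncer's server pool size rather than its client limit, so that during a spike clients queue cheaply in pgbouncer instead of piling 140 active backends onto Postgres. A sketch of the relevant pgbouncer.ini entries; the numbers are only illustrative, since the actual pgbouncer config isn't shown in the thread:

[pgbouncer]
pool_mode = transaction
; clients beyond the pool wait in pgbouncer's queue instead of reaching Postgres
max_client_conn = 500
; cap the connections actually opened to the database at ~40
default_pool_size = 40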
{
"msg_contents": "I originally got started down that trail because running perf top while\nhaving some of the slow query issues showed compaction_alloc at the top of\nthe list. That function is the THP page compaction which lead me to some\npages like:\nhttp://www.olivierdoucet.info/blog/2012/05/19/debugging-a-mysql-stall/\nhttp://structureddata.org/2012/06/18/linux-6-transparent-huge-pages-and-hadoop-workloads/\nhttps://gist.github.com/fgbreel/4454559\nOnly the last is around pg, but I think they all may still be applicable.\n\n The kernel docs in Documentation/vm/transhuge.txt have an explanation of\nthe metrics\n\nWe hadn't been having the issue that much until a few weeks ago, when we\nstarted using the rest of our free memory for page cache.. my thoughts were\nif we have no more memory that's totally free, it might be doing compaction\nmore. That lead me to find how often compaction is happening, but like I\nsaid I don't know how to tell how *long* it's happening - someone who knows\nsystemtap better than I might be able to help with the collection of that\ninfo assuming the right systemtap events are there.\n\nOne thing to keep in mind is the page you linked to (\nhttp://www.pythian.com/blog/performance-tuning-hugepages-in-linux/) talks\nmostly about *regular* large pages, which are related but different than\nTHP.\n\nI have yet to be able to prove THP's involvement one way or the other, but\nwe are going to try some things on a test box to zero in on it.\n\nI originally got started down that trail because running perf top while having some of the slow query issues showed compaction_alloc at the top of the list. That function is the THP page compaction which lead me to some pages like:\nhttp://www.olivierdoucet.info/blog/2012/05/19/debugging-a-mysql-stall/http://structureddata.org/2012/06/18/linux-6-transparent-huge-pages-and-hadoop-workloads/\nhttps://gist.github.com/fgbreel/4454559Only the last is around pg, but I think they all may still be applicable. The kernel docs in Documentation/vm/transhuge.txt have an explanation of the metrics\nWe hadn't been having the issue that much until a few weeks ago, when we started using the rest of our free memory for page cache.. my thoughts were if we have no more memory that's totally free, it might be doing compaction more. That lead me to find how often compaction is happening, but like I said I don't know how to tell how *long* it's happening - someone who knows systemtap better than I might be able to help with the collection of that info assuming the right systemtap events are there.\nOne thing to keep in mind is the page you linked to (http://www.pythian.com/blog/performance-tuning-hugepages-in-linux/) talks mostly about *regular* large pages, which are related but different than THP.\nI have yet to be able to prove THP's involvement one way or the other, but we are going to try some things on a test box to zero in on it.",
"msg_date": "Wed, 6 Feb 2013 15:27:46 -0500",
"msg_from": "Josh Krupka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
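Short of a systemtap script, a profile recorded during a spike can at least show how prominent the compaction path is; a rough sketch (the duration, output file, and grep pattern are arbitrary):

# System-wide profile with call graphs for 30 seconds while the spike is happening.
perf record -a -g -o /tmp/spike.perf -- sleep 30

# Afterwards, look for the THP compaction path in the report.
perf report -i /tmp/spike.perf --stdio | grep -i -E 'compact|huge'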
{
"msg_contents": "Johnny Tan <[email protected]> wrote:\n\n> Wouldn't this be controlled by our checkpoint settings, though? \n\nSpread checkpoints made the issue less severe, but on servers with\na lot of RAM I've had to make the above changes (or even go lower\nwith shared_buffers) to prevent a burst of writes from overwhelming\nthe RAID controllers battery-backed cache. There may be other\nthings which could cause these symptoms, so I'm not certain that\nthis will help; but I have seen this as the cause and seen the\nsuggested changes help.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 6 Feb 2013 13:12:50 -0800 (PST)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
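One way to test Kevin's theory that dirty pages accumulate in shared_buffers and get dumped in bursts is to diff pg_stat_bgwriter over time; this isn't mentioned in the thread, it's just the standard statistics view (the database name below is a placeholder):

# Take a snapshot now and another after a spike; a large increase in
# buffers_checkpoint relative to buffers_clean and buffers_backend means most
# dirty pages are written in the checkpoint burst rather than trickled out.
psql -d yourdb -c "SELECT now(), checkpoints_timed, checkpoints_req,
    buffers_checkpoint, buffers_clean, buffers_backend
    FROM pg_stat_bgwriter;"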
{
"msg_contents": "As others suggested having shared_buffers = 48GB is to large. You should\nnever need to go above 8GB. I have a similar server and mine has\n\nshared_buffers = 8GB\ncheckpoint_completion_target = 0.9\n\nThis looks like a problem of dirty memory being flushed to the disk. You\nshould set your monitoring to monitor dirty memory from /proc/meminfo and\ncheck if it has any correlation with the slowdowns. Also\nvm.dirty_background_bytes should always be a fraction of vm.dirty_bytes,\nsince when there is more than vm.dirty_bytes bytes dirty it will stop all\nwriting to the disk until it flushes everything, while when it reaches the\nvm.dirty_background_bytes it will slowly start flushing those pages to the\ndisk. As far as I remember vm.dirty_bytes should be configured to be a\nlittle less than the cache size of your RAID controller, while\nvm.dirty_background_bytes should be 4 times smaller.\n\n\nStrahinja Kustudić | System Engineer | Nordeus\n\n\nOn Wed, Feb 6, 2013 at 10:12 PM, Kevin Grittner <[email protected]> wrote:\n\n> Johnny Tan <[email protected]> wrote:\n>\n> > Wouldn't this be controlled by our checkpoint settings, though?\n>\n> Spread checkpoints made the issue less severe, but on servers with\n> a lot of RAM I've had to make the above changes (or even go lower\n> with shared_buffers) to prevent a burst of writes from overwhelming\n> the RAID controllers battery-backed cache. There may be other\n> things which could cause these symptoms, so I'm not certain that\n> this will help; but I have seen this as the cause and seen the\n> suggested changes help.\n>\n> -Kevin\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nAs others suggested having shared_buffers = 48GB is to large. You should never need to go above 8GB. I have a similar server and mine has shared_buffers = 8GB\n\ncheckpoint_completion_target = 0.9This looks like a problem of dirty memory being flushed to the disk. You should set your monitoring to monitor dirty memory from /proc/meminfo and check if it has any correlation with the slowdowns. Also vm.dirty_background_bytes should always be a fraction of vm.dirty_bytes, since when there is more than vm.dirty_bytes bytes dirty it will stop all writing to the disk until it flushes everything, while when it reaches the vm.dirty_background_bytes it will slowly start flushing those pages to the disk. As far as I remember vm.dirty_bytes should be configured to be a little less than the cache size of your RAID controller, while vm.dirty_background_bytes should be 4 times smaller.\n\nStrahinja Kustudić | System Engineer | Nordeus\n\nOn Wed, Feb 6, 2013 at 10:12 PM, Kevin Grittner <[email protected]> wrote:\nJohnny Tan <[email protected]> wrote:\n\n> Wouldn't this be controlled by our checkpoint settings, though?\n\nSpread checkpoints made the issue less severe, but on servers with\na lot of RAM I've had to make the above changes (or even go lower\nwith shared_buffers) to prevent a burst of writes from overwhelming\nthe RAID controllers battery-backed cache. There may be other\nthings which could cause these symptoms, so I'm not certain that\nthis will help; but I have seen this as the cause and seen the\nsuggested changes help.\n\n-Kevin\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 7 Feb 2013 13:06:53 +0100",
"msg_from": "=?ISO-8859-2?Q?Strahinja_Kustudi=E6?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
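The dirty-memory monitoring Strahinja suggests can be as simple as polling /proc/meminfo; a sketch (the interval and log file are arbitrary):

# Log Dirty and Writeback every 10 seconds. A sawtooth that peaks just before
# each latency spike would support the dirty-page-flush theory.
while true; do
    echo "$(date '+%F %T') $(grep -E '^(Dirty|Writeback):' /proc/meminfo | tr -s ' \n' ' ')"
    sleep 10
done >> /var/tmp/dirty_mem.log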
{
"msg_contents": "I've benchmarked shared_buffers with high and low settings, in a server dedicated to postgres with 48GB my settings are: \nshared_buffers = 37GB \neffective_cache_size = 38GB\n\nHaving a small number and depending on OS caching is unpredictable, if the server is dedicated to postgres you want make sure postgres has the memory. A random unrelated process doing a cat /dev/sda1 should not destroy postgres buffers.\nI agree your problem is most related to dirty background ration, where buffers are READ only and have nothing to do with disk writes.\n\n\nFrom: [email protected]\nDate: Thu, 7 Feb 2013 13:06:53 +0100\nSubject: Re: [PERFORM] postgresql.conf recommendations\nTo: [email protected]\nCC: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]\n\nAs others suggested having shared_buffers = 48GB is to large. You should never need to go above 8GB. I have a similar server and mine has \n\nshared_buffers = 8GB\n\n\ncheckpoint_completion_target = 0.9\n\nThis looks like a problem of dirty memory being flushed to the disk. You should set your monitoring to monitor dirty memory from /proc/meminfo and check if it has any correlation with the slowdowns. Also vm.dirty_background_bytes should always be a fraction of vm.dirty_bytes, since when there is more than vm.dirty_bytes bytes dirty it will stop all writing to the disk until it flushes everything, while when it reaches the vm.dirty_background_bytes it will slowly start flushing those pages to the disk. As far as I remember vm.dirty_bytes should be configured to be a little less than the cache size of your RAID controller, while vm.dirty_background_bytes should be 4 times smaller.\n\n\n\n\nStrahinja Kustudić | System Engineer | Nordeus\n\n\n\nOn Wed, Feb 6, 2013 at 10:12 PM, Kevin Grittner <[email protected]> wrote:\n\n\nJohnny Tan <[email protected]> wrote:\n\n\n\n> Wouldn't this be controlled by our checkpoint settings, though?\n\n\n\nSpread checkpoints made the issue less severe, but on servers with\n\na lot of RAM I've had to make the above changes (or even go lower\n\nwith shared_buffers) to prevent a burst of writes from overwhelming\n\nthe RAID controllers battery-backed cache. There may be other\n\nthings which could cause these symptoms, so I'm not certain that\n\nthis will help; but I have seen this as the cause and seen the\n\nsuggested changes help.\n\n\n\n-Kevin\n\n\n\n\n\n--\n\nSent via pgsql-performance mailing list ([email protected])\n\nTo make changes to your subscription:\n\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n \t\t \t \t\t \n\n\n\nI've benchmarked shared_buffers with high and low settings, in a server dedicated to postgres with 48GB my settings are: shared_buffers = 37GB effective_cache_size = 38GBHaving a small number and depending on OS caching is unpredictable, if the server is dedicated to postgres you want make sure postgres has the memory. A random unrelated process doing a cat /dev/sda1 should not destroy postgres buffers.I agree your problem is most related to dirty background ration, where buffers are READ only and have nothing to do with disk writes.From: [email protected]: Thu, 7 Feb 2013 13:06:53 +0100Subject: Re: [PERFORM] postgresql.conf recommendationsTo: [email protected]: [email protected]; [email protected]; [email protected]; [email protected]; [email protected] others suggested having shared_buffers = 48GB is to large. You should never need to go above 8GB. 
I have a similar server and mine has shared_buffers = 8GB\n\ncheckpoint_completion_target = 0.9This looks like a problem of dirty memory being flushed to the disk. You should set your monitoring to monitor dirty memory from /proc/meminfo and check if it has any correlation with the slowdowns. Also vm.dirty_background_bytes should always be a fraction of vm.dirty_bytes, since when there is more than vm.dirty_bytes bytes dirty it will stop all writing to the disk until it flushes everything, while when it reaches the vm.dirty_background_bytes it will slowly start flushing those pages to the disk. As far as I remember vm.dirty_bytes should be configured to be a little less than the cache size of your RAID controller, while vm.dirty_background_bytes should be 4 times smaller.\n\nStrahinja Kustudić | System Engineer | Nordeus\n\nOn Wed, Feb 6, 2013 at 10:12 PM, Kevin Grittner <[email protected]> wrote:\nJohnny Tan <[email protected]> wrote:\n\n> Wouldn't this be controlled by our checkpoint settings, though?\n\nSpread checkpoints made the issue less severe, but on servers with\na lot of RAM I've had to make the above changes (or even go lower\nwith shared_buffers) to prevent a burst of writes from overwhelming\nthe RAID controllers battery-backed cache. There may be other\nthings which could cause these symptoms, so I'm not certain that\nthis will help; but I have seen this as the cause and seen the\nsuggested changes help.\n\n-Kevin\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 7 Feb 2013 09:41:46 -0500",
"msg_from": "Charles Gomes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
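A minimal sketch of the vm.dirty_bytes / vm.dirty_background_bytes sizing rule described above. The controller cache size used here (1GB) is an assumption for illustration, not a value from the thread; substitute the real size of the RAID card's battery-backed cache.

# Assumed 1GB of BBU cache: dirty_bytes a little below it, background 4x smaller (run as root).
sysctl -w vm.dirty_bytes=$((768 * 1024 * 1024))
sysctl -w vm.dirty_background_bytes=$((192 * 1024 * 1024))
# To persist across reboots, put the equivalent lines in /etc/sysctl.conf:
#   vm.dirty_bytes = 805306368
#   vm.dirty_background_bytes = 201326592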
{
"msg_contents": "I appreciate all the responses on this thread, even though some are\nconflicting :). We are going to try these one at a time, but we'll likely\nneed a day or so inbetween each to see what impact (if any), so it will\ntake time. But I will post back here our findings.\n\nWe'll start with dirty_background_bytes, as that's straightforward.\n(dirty_bytes is already set to the same size as our RAID cache.)\n\nI appreciate all the responses on this thread, even though some are conflicting :). We are going to try these one at a time, but we'll likely need a day or so inbetween each to see what impact (if any), so it will take time. But I will post back here our findings.\nWe'll start with dirty_background_bytes, as that's straightforward. (dirty_bytes is already set to the same size as our RAID cache.)",
"msg_date": "Thu, 7 Feb 2013 12:29:44 -0500",
"msg_from": "Johnny Tan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf recommendations"
},
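Since the advice earlier in the thread is to correlate the slowdowns with dirty memory, here is one rough way to sample the Dirty/Writeback lines of /proc/meminfo while waiting for a spike; the interval and log path are arbitrary choices, not taken from the thread.

# Append a timestamped Dirty/Writeback sample every 5 seconds.
while true; do
    echo "$(date '+%F %T') $(grep -E '^(Dirty|Writeback):' /proc/meminfo | tr -s ' ')"
    sleep 5
done >> /tmp/dirty_meminfo.log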
{
"msg_contents": "Just as an update from my angle on the THP side... I put together a\nsystemtap script last night and so far it's confirming my theory (at least\nin our environment). I want to go through some more data and make some\nchanges on our test box to see if we can make it go away before declaring\nsuccess - it's always possible two problems are intertwined or that the THP\nthing is only showing up because of the *real* problem... you know how it\ngoes.\n\nBasically the systemtap script does this:\n- probes the compaction function\n- keeps track of the number of calls to it and aggregate time spent in it\nby process\n- at the end spit out the collected info.\n\nSo far when I run the script for a short period of time that I know THP\ncompactions are happening, I have been able to match up the compaction\nduration collected via systemtap with a query in the pg logs that took that\namount of time or slightly longer (as expected). A lot of these are only a\nsecond or so, so I haven't been able to catch everything, but at least the\ndata I am getting is consistent.\n\nWill be interested to see what you find Johnny.\n\nJust as an update from my angle on the THP side... I put together a systemtap script last night and so far it's confirming my theory (at least in our environment). I want to go through some more data and make some changes on our test box to see if we can make it go away before declaring success - it's always possible two problems are intertwined or that the THP thing is only showing up because of the *real* problem... you know how it goes.\nBasically the systemtap script does this:- probes the compaction function- keeps track of the number of calls to it and aggregate time spent in it by process- at the end spit out the collected info.\nSo far when I run the script for a short period of time that I know THP compactions are happening, I have been able to match up the compaction duration collected via systemtap with a query in the pg logs that took that amount of time or slightly longer (as expected). A lot of these are only a second or so, so I haven't been able to catch everything, but at least the data I am getting is consistent.\nWill be interested to see what you find Johnny.",
"msg_date": "Thu, 7 Feb 2013 12:49:36 -0500",
"msg_from": "Josh Krupka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
{
"msg_contents": "Josh:\n\nAre you able to share your systemtap script? Our problem will be to try and\nregenerate the same amount of traffic/load that we see in production. We\ncould replay our queries, but we don't even capture a full set because it'd\nbe roughly 150GB per day.\n\njohnny\n\n\nOn Thu, Feb 7, 2013 at 12:49 PM, Josh Krupka <[email protected]> wrote:\n\n> Just as an update from my angle on the THP side... I put together a\n> systemtap script last night and so far it's confirming my theory (at least\n> in our environment). I want to go through some more data and make some\n> changes on our test box to see if we can make it go away before declaring\n> success - it's always possible two problems are intertwined or that the THP\n> thing is only showing up because of the *real* problem... you know how it\n> goes.\n>\n> Basically the systemtap script does this:\n> - probes the compaction function\n> - keeps track of the number of calls to it and aggregate time spent in it\n> by process\n> - at the end spit out the collected info.\n>\n> So far when I run the script for a short period of time that I know THP\n> compactions are happening, I have been able to match up the compaction\n> duration collected via systemtap with a query in the pg logs that took that\n> amount of time or slightly longer (as expected). A lot of these are only a\n> second or so, so I haven't been able to catch everything, but at least the\n> data I am getting is consistent.\n>\n> Will be interested to see what you find Johnny.\n>\n\nJosh:Are you able to share your systemtap script? Our problem will be to try and regenerate the same amount of traffic/load that we see in production. We could replay our queries, but we don't even capture a full set because it'd be roughly 150GB per day.\njohnnyOn Thu, Feb 7, 2013 at 12:49 PM, Josh Krupka <[email protected]> wrote:\nJust as an update from my angle on the THP side... I put together a systemtap script last night and so far it's confirming my theory (at least in our environment). I want to go through some more data and make some changes on our test box to see if we can make it go away before declaring success - it's always possible two problems are intertwined or that the THP thing is only showing up because of the *real* problem... you know how it goes.\nBasically the systemtap script does this:- probes the compaction function- keeps track of the number of calls to it and aggregate time spent in it by process- at the end spit out the collected info.\nSo far when I run the script for a short period of time that I know THP compactions are happening, I have been able to match up the compaction duration collected via systemtap with a query in the pg logs that took that amount of time or slightly longer (as expected). A lot of these are only a second or so, so I haven't been able to catch everything, but at least the data I am getting is consistent.\nWill be interested to see what you find Johnny.",
"msg_date": "Sat, 9 Feb 2013 08:19:33 -0500",
"msg_from": "Johnny Tan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf recommendations"
},
{
"msg_contents": "On Thu, Feb 7, 2013 at 11:16 PM, Tony Chan <[email protected]> wrote:\n\n> Hi,\n>\n> May I know what is your setting for OS cache?\n>\n>\nTony:\n\nWasn't sure if you were asking me, but here's the output from \"free\":\n# free\n total used free shared buffers cached\nMem: 198333224 187151280 11181944 0 155512 179589612\n-/+ buffers/cache: 7406156 190927068\nSwap: 16777208 0 16777208\n\n\n- better to analyze large joins and sequential scan, and turn this\n> parameter, e.g. reduce the size of effective_cache_size in postgresql.conf\n> and change it for big queries.\n>\n\nThis makes sense. We were setting it based on the tuning guideline from\nthis page:\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\"Setting effective_cache_size to 1/2 of total memory would be a normal\nconservative setting, and 3/4 of memory is a more aggressive but still\nreasonable amount.\"\n\njohnny\n\nOn Thu, Feb 7, 2013 at 11:16 PM, Tony Chan <[email protected]> wrote:\nHi,\nMay I know what is your setting for OS cache?Tony:Wasn't sure if you were asking me, but here's the output from \"free\":\n# free total used free shared buffers cachedMem: 198333224 187151280 11181944 0 155512 179589612-/+ buffers/cache: 7406156 190927068\nSwap: 16777208 0 16777208 \n- better to analyze large joins and sequential scan, and turn this parameter, e.g. reduce the size of effective_cache_size in postgresql.conf and change it for big queries.\nThis makes sense. We were setting it based on the tuning guideline from this page:http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\"Setting effective_cache_size to 1/2 of total memory would be a normal conservative setting, and 3/4 of memory is a more aggressive but still reasonable amount.\"johnny",
"msg_date": "Sat, 9 Feb 2013 08:24:09 -0500",
"msg_from": "Johnny Tan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf recommendations"
},
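For reference, a back-of-the-envelope version of the wiki guideline quoted above (3/4 of RAM as the aggressive setting). The fraction and the reload step are the only assumptions; effective_cache_size itself only requires a reload, not a restart.

total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "effective_cache_size = $((total_kb * 3 / 4 / 1024))MB   # roughly 3/4 of RAM"
# Put the printed line in postgresql.conf, then:
#   pg_ctl reload -D /path/to/data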
{
"msg_contents": "On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <[email protected]> wrote:\n> I've benchmarked shared_buffers with high and low settings, in a server\n> dedicated to postgres with 48GB my settings are:\n> shared_buffers = 37GB\n> effective_cache_size = 38GB\n>\n> Having a small number and depending on OS caching is unpredictable, if the\n> server is dedicated to postgres you want make sure postgres has the memory.\n> A random unrelated process doing a cat /dev/sda1 should not destroy postgres\n> buffers.\n> I agree your problem is most related to dirty background ration, where\n> buffers are READ only and have nothing to do with disk writes.\n\nYou make an assertion here but do not tell us of your benchmarking\nmethods. My testing in the past has show catastrophic performance\nwith very large % of memory as postgresql buffers with heavy write\nloads, especially transactional ones. Many others on this list have\nhad the same thing happen. Also you supposed PostgreSQL has a better\n/ smarter caching algorithm than the OS kernel, and often times this\nis NOT the case.\n\nIn this particular instance the OP may not be seeing an issue from too\nlarge of a pg buffer, my point still stands, large pg_buffer can cause\nproblems with heavy or even moderate write loads.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 9 Feb 2013 07:51:30 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
{
"msg_contents": "Johnny,\n\nSure thing, here's the system tap script:\n\n#! /usr/bin/env stap\n\nglobal pauses, counts\n\nprobe begin {\n printf(\"%s\\n\", ctime(gettimeofday_s()))\n}\n\nprobe kernel.function(\"compaction_alloc@mm/compaction.c\").return {\n elapsed_time = gettimeofday_us() - @entry(gettimeofday_us())\n key = sprintf(\"%d-%s\", pid(), execname())\n pauses[key] = pauses[key] + elapsed_time\n counts[key]++\n}\n\nprobe end {\n printf(\"%s\\n\", ctime(gettimeofday_s()))\n foreach (pid in pauses) {\n printf(\"pid %s : %d ms %d pauses\\n\", pid, pauses[pid]/1000,\ncounts[pid])\n }\n}\n\n\nI was able to do some more observations in production, and some testing in\nthe lab, here are my latest findings:\n- The THP compaction delays aren't happening during queries (at least not\nthat I've seen yet) from the middleware our legacy app uses. The pauses\nduring those queries are what originally got my attention. Those queries\nthough only ever insert/update/read/delete one record at a time (don't\nask). Which would theoretically makes sense, since because of how that app\nworks, the pg backend processes for that app don't have to ask for as much\nmemory during a query, which is when the THP compactions would be happening.\n- The THP compaction delays are impacting backend processes that are for\nother apps, and things like autovacuum processes - sometimes multiple\nseconds worth of delay over a short period of time\n- I haven't been able to duplicate 1+s query times for our \"one record at a\ntime\" app in the lab, but I was getting some 20-30ms queries which is still\nhigher than it should be most of the time. We noticed in production by\nlooking at pg_stat_bgwriter that the backends were having to write pages\nout for 50% of the allocations, so we starting tuning checkpoint/bgwriter\nsettings on the test system and seem to be making some progress. See\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n- I think you already started looking at this, but the linux dirty memory\nsettings may have to be tuned as well (see Greg's post\nhttp://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html).\nOurs haven't been changed from the defaults, but that's another thing to\ntest for next week. Have you had any luck tuning these yet?\n\nJosh\n\nJohnny,Sure thing, here's the system tap script:#! /usr/bin/env stapglobal pauses, countsprobe begin { printf(\"%s\\n\", ctime(gettimeofday_s()))\n\n\n\n}probe kernel.function(\"compaction_alloc@mm/compaction.c\").return { elapsed_time = gettimeofday_us() - @entry(gettimeofday_us()) key = sprintf(\"%d-%s\", pid(), execname()) pauses[key] = pauses[key] + elapsed_time\n\n\n\n counts[key]++}probe end { printf(\"%s\\n\", ctime(gettimeofday_s())) foreach (pid in pauses) { printf(\"pid %s : %d ms %d pauses\\n\", pid, pauses[pid]/1000, counts[pid]) \n\n\n }\n}I was able to do some more observations in production, and some testing in the lab, here are my latest findings:- The THP compaction delays aren't happening during queries (at least not that I've seen yet) from the middleware our legacy app uses. The pauses during those queries are what originally got my attention. Those queries though only ever insert/update/read/delete one record at a time (don't ask). 
Which would theoretically makes sense, since because of how that app works, the pg backend processes for that app don't have to ask for as much memory during a query, which is when the THP compactions would be happening.\n- The THP compaction delays are impacting backend processes that are for other apps, and things like autovacuum processes - sometimes multiple seconds worth of delay over a short period of time- I haven't been able to duplicate 1+s query times for our \"one record at a time\" app in the lab, but I was getting some 20-30ms queries which is still higher than it should be most of the time. We noticed in production by looking at pg_stat_bgwriter that the backends were having to write pages out for 50% of the allocations, so we starting tuning checkpoint/bgwriter settings on the test system and seem to be making some progress. See http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n- I think you already started looking at this, but the linux dirty memory settings may have to be tuned as well (see Greg's post http://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html). Ours haven't been changed from the defaults, but that's another thing to test for next week. Have you had any luck tuning these yet?\nJosh",
"msg_date": "Sat, 9 Feb 2013 14:37:06 -0500",
"msg_from": "Josh Krupka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
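Two small companions to the message above, both sketches rather than anything from the thread itself: how one might run the script (assuming it is saved as thp-compaction.stp), and a query that approximates the "backends writing pages for 50% of the allocations" observation from pg_stat_bgwriter.

# Run while the slowdowns are happening; Ctrl-C fires the end probe and prints the per-pid totals.
sudo stap -v thp-compaction.stp

# Fraction of buffer allocations that backends had to write out themselves:
psql -d postgres -c "
SELECT buffers_backend, buffers_alloc,
       round(100.0 * buffers_backend / nullif(buffers_alloc, 0), 1) AS backend_write_pct
FROM pg_stat_bgwriter;"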
{
"msg_contents": "On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe <[email protected]> wrote:\n> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <[email protected]> wrote:\n>> I've benchmarked shared_buffers with high and low settings, in a server\n>> dedicated to postgres with 48GB my settings are:\n>> shared_buffers = 37GB\n>> effective_cache_size = 38GB\n>>\n>> Having a small number and depending on OS caching is unpredictable, if the\n>> server is dedicated to postgres you want make sure postgres has the memory.\n>> A random unrelated process doing a cat /dev/sda1 should not destroy postgres\n>> buffers.\n>> I agree your problem is most related to dirty background ration, where\n>> buffers are READ only and have nothing to do with disk writes.\n>\n> You make an assertion here but do not tell us of your benchmarking\n> methods.\n\nWell, he is not the only one committing that sin.\n\n> My testing in the past has show catastrophic performance\n> with very large % of memory as postgresql buffers with heavy write\n> loads, especially transactional ones. Many others on this list have\n> had the same thing happen.\n\nPeople also have problems by setting it too low. For example, doing\nbulk loads into indexed tables becomes catastrophically bad when the\nsize of the index exceeds shared_buffers by too much (where \"too much\"\ndepends on kernel, IO subsystem, and settings of vm.dirty* ) , and\nincreasing shared_buffers up to 80% of RAM fixes that (if 80% of RAM\nis large enough to hold the indexes being updated).\n\nOf course when doing bulk loads into truncated tables, you should drop\nthe indexes. But if bulk loading into live tables, that is often a\ncure worse than the disease.\n\n> Also you supposed PostgreSQL has a better\n> / smarter caching algorithm than the OS kernel, and often times this\n> is NOT the case.\n\nEven if it is not smarter as an algorithm, it might still be better to\nuse it. For example, \"heap_blks_read\", \"heap_blks_hit\", and friends\nbecome completely useless if most block \"reads\" are not actually\ncoming from disk.\n\nAlso, vacuum_cost_page_miss is impossible to tune if some unknown but\npotentially large fraction of those misses are not really misses, and\nthat fraction changes from table to table, and from wrap-around scan\nto vm scan on the same table.\n\n> In this particular instance the OP may not be seeing an issue from too\n> large of a pg buffer, my point still stands, large pg_buffer can cause\n> problems with heavy or even moderate write loads.\n\nSure, but that can go the other way as well. What additional\ninstrumentation is needed so that people can actually know which is\nthe case for them?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 9 Feb 2013 12:16:46 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
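As a sketch of how one might check the "index bigger than shared_buffers" condition Jeff describes for a table being bulk loaded; the table name is a placeholder, not from the thread.

psql -d mydb -c "
SELECT current_setting('shared_buffers') AS shared_buffers,
       pg_size_pretty(sum(pg_relation_size(indexrelid))::bigint) AS total_index_size
FROM pg_stat_user_indexes
WHERE relname = 'my_bulk_loaded_table';"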
{
"msg_contents": "On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes <[email protected]> wrote:\n> On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe <[email protected]> wrote:\n>> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <[email protected]> wrote:\n>>> I've benchmarked shared_buffers with high and low settings, in a server\n>>> dedicated to postgres with 48GB my settings are:\n>>> shared_buffers = 37GB\n>>> effective_cache_size = 38GB\n>>>\n>>> Having a small number and depending on OS caching is unpredictable, if the\n>>> server is dedicated to postgres you want make sure postgres has the memory.\n>>> A random unrelated process doing a cat /dev/sda1 should not destroy postgres\n>>> buffers.\n>>> I agree your problem is most related to dirty background ration, where\n>>> buffers are READ only and have nothing to do with disk writes.\n>>\n>> You make an assertion here but do not tell us of your benchmarking\n>> methods.\n>\n> Well, he is not the only one committing that sin.\n\nI'm not asking for a complete low level view. but it would be nice to\nknow if he's benchmarking heavy read or write loads, lots of users, a\nfew users, something. All we get is \"I've benchmarked a lot\" followed\nby \"don't let the OS do the caching.\" At least with my testing I was\nusing a large transactional system (heavy write) and there I KNOW from\ntesting that large shared_buffers do nothing but get in the way.\n\nall the rest of the stuff you mention is why we have effective cache\nsize which tells postgresql about how much of the data CAN be cached.\nIn short, postgresql is designed to use and / or rely on OS cache.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 9 Feb 2013 14:03:35 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
{
"msg_contents": "On Saturday, February 9, 2013, Scott Marlowe wrote:\n\n> On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes <[email protected]<javascript:;>>\n> wrote:\n> > On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe <[email protected]<javascript:;>>\n> wrote:\n> >> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <[email protected]<javascript:;>>\n> wrote:\n> >>> I've benchmarked shared_buffers with high and low settings, in a server\n> >>> dedicated to postgres with 48GB my settings are:\n> >>> shared_buffers = 37GB\n> >>> effective_cache_size = 38GB\n> >>>\n> >>> Having a small number and depending on OS caching is unpredictable, if\n> the\n> >>> server is dedicated to postgres you want make sure postgres has the\n> memory.\n> >>> A random unrelated process doing a cat /dev/sda1 should not destroy\n> postgres\n> >>> buffers.\n> >>> I agree your problem is most related to dirty background ration, where\n> >>> buffers are READ only and have nothing to do with disk writes.\n> >>\n> >> You make an assertion here but do not tell us of your benchmarking\n> >> methods.\n> >\n> > Well, he is not the only one committing that sin.\n>\n> I'm not asking for a complete low level view. but it would be nice to\n> know if he's benchmarking heavy read or write loads, lots of users, a\n> few users, something. All we get is \"I've benchmarked a lot\" followed\n> by \"don't let the OS do the caching.\" At least with my testing I was\n> using a large transactional system (heavy write) and there I KNOW from\n> testing that large shared_buffers do nothing but get in the way.\n>\n\nCan you see this with pgbench workloads? (it is certainly write heavy)\n\nI've tried to reproduce these problems, and was never able to.\n\n\n>\n> all the rest of the stuff you mention is why we have effective cache\n> size which tells postgresql about how much of the data CAN be cached.\n>\n\nThe effective_cache_size setting does not figure into any of the things I\nmentioned.\n\nCheers,\n\nJeff\n\nOn Saturday, February 9, 2013, Scott Marlowe wrote:On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes <[email protected]> wrote:\n\n> On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe <[email protected]> wrote:\n>> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <[email protected]> wrote:\n>>> I've benchmarked shared_buffers with high and low settings, in a server\n>>> dedicated to postgres with 48GB my settings are:\n>>> shared_buffers = 37GB\n>>> effective_cache_size = 38GB\n>>>\n>>> Having a small number and depending on OS caching is unpredictable, if the\n>>> server is dedicated to postgres you want make sure postgres has the memory.\n>>> A random unrelated process doing a cat /dev/sda1 should not destroy postgres\n>>> buffers.\n>>> I agree your problem is most related to dirty background ration, where\n>>> buffers are READ only and have nothing to do with disk writes.\n>>\n>> You make an assertion here but do not tell us of your benchmarking\n>> methods.\n>\n> Well, he is not the only one committing that sin.\n\nI'm not asking for a complete low level view. but it would be nice to\nknow if he's benchmarking heavy read or write loads, lots of users, a\nfew users, something. All we get is \"I've benchmarked a lot\" followed\nby \"don't let the OS do the caching.\" At least with my testing I was\nusing a large transactional system (heavy write) and there I KNOW from\ntesting that large shared_buffers do nothing but get in the way.Can you see this with pgbench workloads? 
(it is certainly write heavy)I've tried to reproduce these problems, and was never able to.\n \n\nall the rest of the stuff you mention is why we have effective cache\nsize which tells postgresql about how much of the data CAN be cached.The effective_cache_size setting does not figure into any of the things I mentioned. Cheers,\nJeff",
"msg_date": "Sat, 9 Feb 2013 20:02:55 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
{
"msg_contents": "> Date: Sat, 9 Feb 2013 14:03:35 -0700\r\n> Subject: Re: [PERFORM] postgresql.conf recommendations\r\n> From: [email protected]\r\n> To: [email protected]\r\n> CC: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]\r\n> \r\n> On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes <[email protected]> wrote:\r\n> > On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe <[email protected]> wrote:\r\n> >> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <[email protected]> wrote:\r\n> >>> I've benchmarked shared_buffers with high and low settings, in a server\r\n> >>> dedicated to postgres with 48GB my settings are:\r\n> >>> shared_buffers = 37GB\r\n> >>> effective_cache_size = 38GB\r\n> >>>\r\n> >>> Having a small number and depending on OS caching is unpredictable, if the\r\n> >>> server is dedicated to postgres you want make sure postgres has the memory.\r\n> >>> A random unrelated process doing a cat /dev/sda1 should not destroy postgres\r\n> >>> buffers.\r\n> >>> I agree your problem is most related to dirty background ration, where\r\n> >>> buffers are READ only and have nothing to do with disk writes.\r\n> >>\r\n> >> You make an assertion here but do not tell us of your benchmarking\r\n> >> methods.\r\n> >\r\n> > Well, he is not the only one committing that sin.\r\n> \r\n> I'm not asking for a complete low level view. but it would be nice to\r\n> know if he's benchmarking heavy read or write loads, lots of users, a\r\n> few users, something. All we get is \"I've benchmarked a lot\" followed\r\n> by \"don't let the OS do the caching.\" At least with my testing I was\r\n> using a large transactional system (heavy write) and there I KNOW from\r\n> testing that large shared_buffers do nothing but get in the way.\r\n> \r\n> all the rest of the stuff you mention is why we have effective cache\r\n> size which tells postgresql about how much of the data CAN be cached.\r\n> In short, postgresql is designed to use and / or rely on OS cache.\r\n> \r\nHello Scott\r\n\n\nI've tested using 8 bulk writers in a 8 core machine (16 Threads).\n\nI've loaded a database with 17 partitions, total 900 million rows and later\nexecuted single queries on it.\n\nIn my case the main point of having postgres manage memory is because postgres is\nthe single and most important application running on the server.\n\n \n\nIf Linux would manage the Cache it would not know what is important and\nwhat should be discarded, it would simply discard the oldest least accessed\nentry.\n\nLet's say a DBA logs in the server and copies a 20GB file. If you leave\nLinux to decide, it will decide that the 20GB file is more important than the old not so\nheavily accessed postgres entries. \n\n \n\nThis may be looked in a case by case, in my case I need PostgreSQL\nto perform FAST and I also don't want cron jobs taking my cache out. For\nexample (locate, logrotate, prelink, makewhatis).\n\n \n\nIf postgres was unable to manage 40GB of RAM, we would get into major\nproblems because nowadays it's normal to buy 64GB servers, and many of Us have dealt with 512GB Ram Servers.\n\n \n\nBy the way, I've tested this same scenario with Postgres, Mysql and\nOracle. And Postgres have given the best results overall. 
Especially with\nsymmetric replication turned on.\n\n\r\n> \r\n> -- \r\n> Sent via pgsql-performance mailing list ([email protected])\r\n> To make changes to your subscription:\r\n> http://www.postgresql.org/mailpref/pgsql-performance\r\n \t\t \t \t\t \n\n\n\n> Date: Sat, 9 Feb 2013 14:03:35 -0700> Subject: Re: [PERFORM] postgresql.conf recommendations> From: [email protected]> To: [email protected]> CC: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]> > On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes <[email protected]> wrote:> > On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe <[email protected]> wrote:> >> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <[email protected]> wrote:> >>> I've benchmarked shared_buffers with high and low settings, in a server> >>> dedicated to postgres with 48GB my settings are:> >>> shared_buffers = 37GB> >>> effective_cache_size = 38GB> >>>> >>> Having a small number and depending on OS caching is unpredictable, if the> >>> server is dedicated to postgres you want make sure postgres has the memory.> >>> A random unrelated process doing a cat /dev/sda1 should not destroy postgres> >>> buffers.> >>> I agree your problem is most related to dirty background ration, where> >>> buffers are READ only and have nothing to do with disk writes.> >>> >> You make an assertion here but do not tell us of your benchmarking> >> methods.> >> > Well, he is not the only one committing that sin.> > I'm not asking for a complete low level view. but it would be nice to> know if he's benchmarking heavy read or write loads, lots of users, a> few users, something. All we get is \"I've benchmarked a lot\" followed> by \"don't let the OS do the caching.\" At least with my testing I was> using a large transactional system (heavy write) and there I KNOW from> testing that large shared_buffers do nothing but get in the way.> > all the rest of the stuff you mention is why we have effective cache> size which tells postgresql about how much of the data CAN be cached.> In short, postgresql is designed to use and / or rely on OS cache.> Hello Scott\nI've tested using 8 bulk writers in a 8 core machine (16 Threads).\nI've loaded a database with 17 partitions, total 900 million rows and later\nexecuted single queries on it.\nIn my case the main point of having postgres manage memory is because postgres is\nthe single and most important application running on the server.\n \nIf Linux would manage the Cache it would not know what is important and\nwhat should be discarded, it would simply discard the oldest least accessed\nentry.\nLet's say a DBA logs in the server and copies a 20GB file. If you leave\nLinux to decide, it will decide that the 20GB file is more important than the old not so\nheavily accessed postgres entries. \n \nThis may be looked in a case by case, in my case I need PostgreSQL\nto perform FAST and I also don't want cron jobs taking my cache out. For\nexample (locate, logrotate, prelink, makewhatis).\n \nIf postgres was unable to manage 40GB of RAM, we would get into major\nproblems because nowadays it's normal to buy 64GB servers, and many of Us have dealt with 512GB Ram Servers.\n \nBy the way, I've tested this same scenario with Postgres, Mysql and\nOracle. And Postgres have given the best results overall. 
Especially with\nsymmetric replication turned on.\n> > -- > Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 11 Feb 2013 09:57:36 -0500",
"msg_from": "Charles Gomes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
{
"msg_contents": "On Sat, Feb 9, 2013 at 2:37 PM, Josh Krupka <[email protected]> wrote:\n\n> Johnny,\n>\n> Sure thing, here's the system tap script:\n>\n>\nThank you for this!\n\n\n\n> - I think you already started looking at this, but the linux dirty memory\n> settings may have to be tuned as well (see Greg's post\n> http://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html).\n> Ours haven't been changed from the defaults, but that's another thing to\n> test for next week. Have you had any luck tuning these yet?\n>\n\nWe lowered dirty_background_bytes to 1/4 of what dirty_bytes was. That\ndidn't get rid of the spikes, but seemed to have some impact -- just from\n24 hours' observation, the spikes were clustered together more closely, and\nthen there were long stretches without any spikes. Unfortunately, we only\ngot to observe 24 hours before we made the next change.\n\nThe next change was lowering pgbouncer poolsize down to 50. We originally\n(way back when) started out at 100, then bumped up to 150. But Jeff Janes'\nrationale for LOWERING the poolsize/connections made sense to us.\n\nAnd so far, 48 hours since lowering it, it does seem to have eliminated the\nDB spikes! We haven't seen one yet, and this is the longest we've gone\nwithout seeing one.\n\nTo be more precise, we now have a lot more \"short\" spikes -- i.e., our\nresponse time graphs are more jagged, but at least the peaks are under the\nthreshold we desire. Previously, they were \"smooth\" in between the spikes.\n\nWe will probably tweak this knob some more -- i.e., what is the sweet spot\nbetween 1 and 100? Would it be higher than 50 but less than 100? Or is it\nsomewhere lower than 50?\n\nEven after we find that sweet spot, I'm still going to try some of the\nother suggestions. I do want to play with shared_buffers, just so we know\nwhether, for our setup, it's better to have larger or smaller\nshared_buffers. I'd also like to test the THP stuff on a testing cluster,\nwhich we are still in the middle of setting up (or rather, we have set up,\nbut we need to make it more prod-like).\n\njohnny\n\nOn Sat, Feb 9, 2013 at 2:37 PM, Josh Krupka <[email protected]> wrote:\nJohnny,Sure thing, here's the system tap script:\nThank you for this! \n- I think you already started looking at this, but the linux dirty memory settings may have to be tuned as well (see Greg's post http://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html). Ours haven't been changed from the defaults, but that's another thing to test for next week. Have you had any luck tuning these yet?\nWe lowered dirty_background_bytes to 1/4 of what dirty_bytes was. That didn't get rid of the spikes, but seemed to have some impact -- just from 24 hours' observation, the spikes were clustered together more closely, and then there were long stretches without any spikes. Unfortunately, we only got to observe 24 hours before we made the next change.\nThe next change was lowering pgbouncer poolsize down to 50. We originally (way back when) started out at 100, then bumped up to 150. But Jeff Janes' rationale for LOWERING the poolsize/connections made sense to us.\nAnd so far, 48 hours since lowering it, it does seem to have eliminated the DB spikes! We haven't seen one yet, and this is the longest we've gone without seeing one.\nTo be more precise, we now have a lot more \"short\" spikes -- i.e., our response time graphs are more jagged, but at least the peaks are under the threshold we desire. 
Previously, they were \"smooth\" in between the spikes.\nWe will probably tweak this knob some more -- i.e., what is the sweet spot between 1 and 100? Would it be higher than 50 but less than 100? Or is it somewhere lower than 50?\nEven after we find that sweet spot, I'm still going to try some of the other suggestions. I do want to play with shared_buffers, just so we know whether, for our setup, it's better to have larger or smaller shared_buffers. I'd also like to test the THP stuff on a testing cluster, which we are still in the middle of setting up (or rather, we have set up, but we need to make it more prod-like).\njohnny",
"msg_date": "Mon, 11 Feb 2013 17:10:47 -0500",
"msg_from": "Johnny Tan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql.conf recommendations"
},
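The pool-size change described above is typically a one-line edit in pgbouncer.ini followed by a reload through the admin console; the host/port, database name, and admin user below are assumptions for illustration, not the actual setup from the thread.

# pgbouncer.ini ([databases] section), per-database override:
#   mydb = host=127.0.0.1 dbname=mydb pool_size=50
# Reload and inspect, assuming an admin user listed in admin_users:
psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c "RELOAD;"
psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c "SHOW POOLS;"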
{
"msg_contents": "On Mon, Feb 11, 2013 at 7:57 AM, Charles Gomes <[email protected]> wrote:\n>\n>\n>> Date: Sat, 9 Feb 2013 14:03:35 -0700\n>\n>> Subject: Re: [PERFORM] postgresql.conf recommendations\n>> From: [email protected]\n>> To: [email protected]\n>> CC: [email protected]; [email protected]; [email protected];\n>> [email protected]; [email protected]; [email protected]; [email protected];\n>> [email protected]\n>\n>>\n>> On Sat, Feb 9, 2013 at 1:16 PM, Jeff Janes <[email protected]> wrote:\n>> > On Sat, Feb 9, 2013 at 6:51 AM, Scott Marlowe <[email protected]>\n>> > wrote:\n>> >> On Thu, Feb 7, 2013 at 7:41 AM, Charles Gomes <[email protected]>\n>> >> wrote:\n>> >>> I've benchmarked shared_buffers with high and low settings, in a\n>> >>> server\n>> >>> dedicated to postgres with 48GB my settings are:\n>> >>> shared_buffers = 37GB\n>> >>> effective_cache_size = 38GB\n>> >>>\n>> >>> Having a small number and depending on OS caching is unpredictable, if\n>> >>> the\n>> >>> server is dedicated to postgres you want make sure postgres has the\n>> >>> memory.\n>> >>> A random unrelated process doing a cat /dev/sda1 should not destroy\n>> >>> postgres\n>> >>> buffers.\n>> >>> I agree your problem is most related to dirty background ration, where\n>> >>> buffers are READ only and have nothing to do with disk writes.\n>> >>\n>> >> You make an assertion here but do not tell us of your benchmarking\n>> >> methods.\n>> >\n>> > Well, he is not the only one committing that sin.\n>>\n>> I'm not asking for a complete low level view. but it would be nice to\n>> know if he's benchmarking heavy read or write loads, lots of users, a\n>> few users, something. All we get is \"I've benchmarked a lot\" followed\n>> by \"don't let the OS do the caching.\" At least with my testing I was\n>> using a large transactional system (heavy write) and there I KNOW from\n>> testing that large shared_buffers do nothing but get in the way.\n>>\n>> all the rest of the stuff you mention is why we have effective cache\n>> size which tells postgresql about how much of the data CAN be cached.\n>> In short, postgresql is designed to use and / or rely on OS cache.\n>>\n> Hello Scott\n>\n> I've tested using 8 bulk writers in a 8 core machine (16 Threads).\n>\n> I've loaded a database with 17 partitions, total 900 million rows and later\n> executed single queries on it.\n>\n> In my case the main point of having postgres manage memory is because\n> postgres is the single and most important application running on the server.\n>\n>\n>\n> If Linux would manage the Cache it would not know what is important and what\n> should be discarded, it would simply discard the oldest least accessed\n> entry.\n\nPoint taken however,\n\n> Let's say a DBA logs in the server and copies a 20GB file. If you leave\n> Linux to decide, it will decide that the 20GB file is more important than\n> the old not so heavily accessed postgres entries.\n\nThe linux kernel (and most other unix kernels) don't cache that way.\nThey're usually quite smart about caching. While some older things\nmight get pushed out, it doesn't generally make room for larger files\nthat have been accessed just once. 
But on a mixed load server this\nmay not be the case.\n\n> If postgres was unable to manage 40GB of RAM, we would get into major\n> problems because nowadays it's normal to buy 64GB servers, and many of Us\n> have dealt with 512GB Ram Servers.\n\nIt's not that postgres can't hadndle large cache, it's that quite\noften the kernel is simply better at it.\n\n> By the way, I've tested this same scenario with Postgres, Mysql and Oracle.\n> And Postgres have given the best results overall. Especially with symmetric\n> replication turned on.\n\nGood to know. In the past PostgreSQL has had some performance issues\nwith large shared_buffer values, and this is still apparently the case\nwhen run on windows. With dedicated linux servers running just\npostgres, letting the kernel handle cache has yielded very good\nresults. Most of the negative aspects on large buffers I've seen has\nbeen on heavy write / high transactional dbs.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 11 Feb 2013 15:57:17 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
{
"msg_contents": "> We will probably tweak this knob some more -- i.e., what is the sweet spot between 1 and 100? Would it be higher than 50 but less than 100? Or is it somewhere lower than 50?\n> \n> \n\nI would love to know the answer to this as well. We have a similar situation, pgbouncer with transaction log pooling with 140 connections. What is the the right value to size pgbouncer connections to? Is there a formula that takes the # of cores into account?\n\nWe will probably tweak this knob some more -- i.e., what is the sweet spot between 1 and 100? Would it be higher than 50 but less than 100? Or is it somewhere lower than 50?I would love to know the answer to this as well. We have a similar situation, pgbouncer with transaction log pooling with 140 connections. What is the the right value to size pgbouncer connections to? Is there a formula that takes the # of cores into account?",
"msg_date": "Mon, 11 Feb 2013 18:29:32 -0500",
"msg_from": "Will Platnick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
},
{
"msg_contents": "On Mon, Feb 11, 2013 at 4:29 PM, Will Platnick <[email protected]> wrote:\n> We will probably tweak this knob some more -- i.e., what is the sweet spot\n> between 1 and 100? Would it be higher than 50 but less than 100? Or is it\n> somewhere lower than 50?\n>\n> I would love to know the answer to this as well. We have a similar\n> situation, pgbouncer with transaction log pooling with 140 connections.\n> What is the the right value to size pgbouncer connections to? Is there a\n> formula that takes the # of cores into account?\n\nIf you can come up with a synthetic benchmark that's similar to what\nyour real load is (size, mix etc) then you can test it and see at what\nnumber your throughput peaks and you have good behavior from the\nserver.\n\nOn a server I built a few years back with 48 AMD cores and 24 Spinners\nin a RAID-10 for data and 4 drives for a RAID-10 for pg_xlog (no RAID\ncontroller in this one as the chassis cooked them) my throughput\npeaked at ~60 connections. What you'll wind up with is a graph where\nthe throughput keeps climbing as you add clients and at some point it\nwill usually drop off quickly when you pass it. The sharper the drop\nthe more dangerous it is to run your server in such an overloaded\nsituation.\n\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 11 Feb 2013 19:54:57 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf recommendations"
}
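A rough version of the client-count sweep Scott describes, using pgbench as the synthetic write-heavy load; the scale factor, thread count, and duration are arbitrary example values and should be adjusted to resemble the real workload.

createdb bench
pgbench -i -s 300 bench
for c in 10 25 50 75 100 150 200; do
    echo "=== $c clients ==="
    pgbench -c "$c" -j 4 -T 120 bench | grep tps
done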
] |
[
{
"msg_contents": "Hi,\n\nI have problems with the performance of FTS in a query like this:\n\n SELECT * FROM FullTextSearch WHERE content_tsv_gin @@\nplainto_tsquery('english', 'good');\n\nIt's slow (> 30 sec.) for some GB (27886 html files, originally 73 MB zipped).\nThe planner obviously always chooses table scan: http://explain.depesz.com/s/EEE\nI have to check again, if I'm doing something wrong but I'm pretty\nsure it has to do with de-toasting and (wrong?) cost estimations.\n\nI've seen some comments here saying that estimating detoasting costs\n(especially with operator \"@@\" and GIN index) is an open issue (since\nyears?).\nAnd I found a nice blog here [1] which uses 9.2/9.1 and proposes to\ndisable sequential table scan (SET enable_seqscan off;). But this is\nno option for me since other queries still need seqscan.\nCan anyone tell me if is on some agenda here (e.g. as an open item for >9.2)?\n\nYours, Stefan\n\n[1] http://palominodb.com/blog/2012/03/06/considerations-about-text-searchs-big-fields-and-planner-costs\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 8 Feb 2013 01:52:46 +0100",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "FTS performance issue probably due to wrong planner estimate of\n\tdetoasting"
},
{
"msg_contents": "Hello\n\nyou can try to wrap searching to immutable function and use following trick\n\nhttp://postgres.cz/wiki/PostgreSQL_SQL_Tricks#Using_IMMUTABLE_functions_as_hints_for_the_optimizer\n\nRegards\n\nPavel Stehule\n\n2013/2/8 Stefan Keller <[email protected]>:\n> Hi,\n>\n> I have problems with the performance of FTS in a query like this:\n>\n> SELECT * FROM FullTextSearch WHERE content_tsv_gin @@\n> plainto_tsquery('english', 'good');\n>\n> It's slow (> 30 sec.) for some GB (27886 html files, originally 73 MB zipped).\n> The planner obviously always chooses table scan: http://explain.depesz.com/s/EEE\n> I have to check again, if I'm doing something wrong but I'm pretty\n> sure it has to do with de-toasting and (wrong?) cost estimations.\n>\n> I've seen some comments here saying that estimating detoasting costs\n> (especially with operator \"@@\" and GIN index) is an open issue (since\n> years?).\n> And I found a nice blog here [1] which uses 9.2/9.1 and proposes to\n> disable sequential table scan (SET enable_seqscan off;). But this is\n> no option for me since other queries still need seqscan.\n> Can anyone tell me if is on some agenda here (e.g. as an open item for >9.2)?\n>\n> Yours, Stefan\n>\n> [1] http://palominodb.com/blog/2012/03/06/considerations-about-text-searchs-big-fields-and-planner-costs\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 8 Feb 2013 06:45:57 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FTS performance issue probably due to wrong planner\n\testimate of detoasting"
},
{
"msg_contents": "On 08/02/13 01:52, Stefan Keller wrote:\n> Hi,\n>\n> I have problems with the performance of FTS in a query like this:\n>\n> SELECT * FROM FullTextSearch WHERE content_tsv_gin @@\n> plainto_tsquery('english', 'good');\n>\n> It's slow (> 30 sec.) for some GB (27886 html files, originally 73 MB zipped).\n> The planner obviously always chooses table scan: http://explain.depesz.com/s/EEE\n> I have to check again, if I'm doing something wrong but I'm pretty\n> sure it has to do with de-toasting and (wrong?) cost estimations.\nIf you havent done it .. bump up statistics target on the column and\nre-analyze, see what that gives.\n\nI have also been playing with the cost-numbers in order to get it to favour\nan index-scan more often. That is lowering random_page_cost to be close to\nseq_page_cost, dependent on your system, the amount of memory, etc, then\nthis can have negative side-effects on non-gin-queries.\n\n-- \nJesper\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 08 Feb 2013 07:56:03 +0100",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FTS performance issue probably due to wrong planner\n\testimate of detoasting"
},
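Concretely, the two suggestions above could look like this for the table in the original post; the statistics target of 1000 and random_page_cost of 1.5 are example values only, and the SET is per-session so other queries keep the default costs.

psql -d mydb -c "ALTER TABLE FullTextSearch ALTER COLUMN content_tsv_gin SET STATISTICS 1000;"
psql -d mydb -c "ANALYZE FullTextSearch;"
psql -d mydb -c "SET random_page_cost = 1.5;
                 EXPLAIN ANALYZE
                 SELECT * FROM FullTextSearch
                 WHERE content_tsv_gin @@ plainto_tsquery('english', 'good');"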
{
"msg_contents": "Hi Jesper and Pavel\n\nThx for your hints.\nI'm rather reluctant in tuning with unwanted side effects, We'll see.\nI have to setup my system and db again before I can try out your tricks.\n\nYours, Stefan\n\n2013/2/8 Jesper Krogh <[email protected]>:\n> On 08/02/13 01:52, Stefan Keller wrote:\n>>\n>> Hi,\n>>\n>> I have problems with the performance of FTS in a query like this:\n>>\n>> SELECT * FROM FullTextSearch WHERE content_tsv_gin @@\n>> plainto_tsquery('english', 'good');\n>>\n>> It's slow (> 30 sec.) for some GB (27886 html files, originally 73 MB\n>> zipped).\n>> The planner obviously always chooses table scan:\n>> http://explain.depesz.com/s/EEE\n>> I have to check again, if I'm doing something wrong but I'm pretty\n>> sure it has to do with de-toasting and (wrong?) cost estimations.\n>\n> If you havent done it .. bump up statistics target on the column and\n> re-analyze, see what that gives.\n>\n> I have also been playing with the cost-numbers in order to get it to favour\n> an index-scan more often. That is lowering random_page_cost to be close to\n> seq_page_cost, dependent on your system, the amount of memory, etc, then\n> this can have negative side-effects on non-gin-queries.\n>\n> --\n> Jesper\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 8 Feb 2013 18:30:06 +0100",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FTS performance issue probably due to wrong planner\n\testimate of detoasting"
}
] |
[
{
"msg_contents": "This query http://pgsql.privatepaste.com/359bed8e9e gets executed every \n500 ms and normally completes really quickly \nhttp://explain.depesz.com/s/poVQ, but the more job_batches table \n(http://pgsql.privatepaste.com/eaf6d63fd2) gets used, the slower this \nquery gets, to the point where it takes minutes to execute \nhttp://explain.depesz.com/s/rDJ.\n\nAnalyzing job_batches table resolves the issue immediately. This table \ngets about a thousand new records an hour that are also updated three \ntimes as the status changes. No deletion is occurring.\n\nI've tried changing autovacuum_analyze_scale_factor as well as setting \njob_batches table to auto analyze every 500 changes (by setting scale \nfactor to 0 and threshold to 500), but I still keep running into that \nissue, sometimes minutes after the table was analyzed.\n\nI checked pg_locks to see if anything had granted=false, but that \ndoesn't seem to be the case.\n\nThis issue is occurring on two separate instances 9.0.4 and 9.1.4 - both \nhave nearly identical settings, just run on a different hardware.\n\nConfig changes http://pgsql.privatepaste.com/8acfb9d136\n\nAny ideas what is going wrong here?\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 8 Feb 2013 12:36:43 +0000",
"msg_from": "Karolis Pocius <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query even with aggressive auto analyze"
},
{
"msg_contents": "On Friday, February 08, 2013 6:06 PM Karolis Pocius wrote:\n\n\n> I've tried changing autovacuum_analyze_scale_factor as well as setting\n> job_batches table to auto analyze every 500 changes (by setting scale\n> factor to 0 and threshold to 500), but I still keep running into that\n> issue, sometimes minutes after the table was analyzed.\n\n> I checked pg_locks to see if anything had granted=false, but that\n>doesn't seem to be the case.\n\n> This issue is occurring on two separate instances 9.0.4 and 9.1.4 - both\n> have nearly identical settings, just run on a different hardware.\n\n> Config changes http://pgsql.privatepaste.com/8acfb9d136\n\n> Any ideas what is going wrong here?\n\nI think you can verify in Logs whether analyze is happening as per your expectation.\nYou can set log_autovacuum_min_duration = 0, so that auto_analyze can be logged everytime it happens.\n\nWith Regards,\nAmit Kapila.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 10 Feb 2013 06:52:25 +0000",
"msg_from": "Amit kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query even with aggressive auto analyze"
}
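For reference, the per-table analyze settings the original poster describes and the logging suggestion above can be expressed as follows; the database name is a placeholder.

psql -d mydb -c "ALTER TABLE job_batches SET (
                   autovacuum_analyze_scale_factor = 0,
                   autovacuum_analyze_threshold    = 500);"
# postgresql.conf, then reload -- logs every autovacuum/autoanalyze run:
#   log_autovacuum_min_duration = 0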
] |
[
{
"msg_contents": "I was wondering if somebody could clear up how tablespaces are used.\nLet's say I have three classes of storage:\n- ramdisk (tmpfs)\n- SSD\n- spinning rust\n\nFurthermore, let's say I'd like to be able to tell postgresql to\nprefer them - in that order - until they each get full. IE, use tmpfs\nuntil it reports ENOSPC and then fall back to SSD, finally falling\nback to spinning rust. Is there a way to do this?\n\n-- \nJon\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 8 Feb 2013 14:52:37 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "temp tablespaces and SSDs, etc.."
},
{
"msg_contents": "On Fri, Feb 8, 2013 at 02:52:37PM -0600, Jon Nelson wrote:\n> I was wondering if somebody could clear up how tablespaces are used.\n> Let's say I have three classes of storage:\n> - ramdisk (tmpfs)\n> - SSD\n> - spinning rust\n> \n> Furthermore, let's say I'd like to be able to tell postgresql to\n> prefer them - in that order - until they each get full. IE, use tmpfs\n> until it reports ENOSPC and then fall back to SSD, finally falling\n> back to spinning rust. Is there a way to do this?\n\nNope.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Feb 2013 19:32:38 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temp tablespaces and SSDs, etc.."
}
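There is no ordered fallback, as answered above, but for completeness: temp_tablespaces accepts a list and PostgreSQL picks among its members for each temp file, so temporary usage can at least be spread across the faster devices. A sketch with made-up paths; note that a tmpfs-backed tablespace directory disappears on reboot and must be recreated before the server starts.

psql -d postgres -c "CREATE TABLESPACE temp_ram LOCATION '/mnt/tmpfs/pgtemp';"
psql -d postgres -c "CREATE TABLESPACE temp_ssd LOCATION '/mnt/ssd/pgtemp';"
# postgresql.conf (or per-session SET):
#   temp_tablespaces = 'temp_ram, temp_ssd'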
] |
[
{
"msg_contents": "Start off, I'm new to postgres. I'm running Ubuntu 10.04.04 with postgres 9.1 on a VM with 32 GB of RAM.\n\nI'm trying to increase the response time on submitted queries. I'm comparing the same queries to a SQL Server instance with the same data sets.\n\nThe queries are used in our Analytics environments.\nThe parameters we have modified on postgresql.confi are the following:\n\n* shared_buffers = 24MB to 512 MB to 4GB\n\n* work_mem = 1MB to 512MB to 1GB\n\n* maintenance_work = 16MB to 1GB\n\n* temp_buffers = 8MB to 1GB\n\n* effective_cache_size = 128MB to 1024MB to 16GB\n\n* autovacuum = on to off\n\nCurrently the queries are running about 4x slower on postgres than sql server. Is there any settings I need to check?\n\nThanks\nKristian\n\nStart off, I’m new to postgres. I’m running Ubuntu 10.04.04 with postgres 9.1 on a VM with 32 GB of RAM. I’m trying to increase the response time on submitted queries. I’m comparing the same queries to a SQL Server instance with the same data sets. The queries are used in our Analytics environments.The parameters we have modified on postgresql.confi are the following:· shared_buffers = 24MB to 512 MB to 4GB· work_mem = 1MB to 512MB to 1GB· maintenance_work = 16MB to 1GB· temp_buffers = 8MB to 1GB· effective_cache_size = 128MB to 1024MB to 16GB· autovacuum = on to off Currently the queries are running about 4x slower on postgres than sql server. Is there any settings I need to check? ThanksKristian",
"msg_date": "Fri, 8 Feb 2013 16:13:29 -0600",
"msg_from": "\"Foster, Kristian B (HSC)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Time is Slow"
},
{
"msg_contents": "\"Foster, Kristian B (HSC)\" <[email protected]> wrote:\n\n> Currently the queries are running about 4x slower on postgres than\n> sql server. Is there any settings I need to check?\n\nYour best bet would be to pick one query and follow the steps\nrecommended here:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nThat is likely to yield good advice which will help all queries,\nbut if anything remains slow after the first one is sorted out,\npick another.\n\n-Kevin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 8 Feb 2013 14:45:59 -0800 (PST)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Time is Slow"
}
] |
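For reference, a minimal sketch of the kind of per-query evidence the SlowQueryQuestions page asks for; the table and filter below are hypothetical, and EXPLAIN (ANALYZE, BUFFERS) is available on 9.1:

    -- capture the real plan, row counts, timings and buffer usage for one slow query
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT customer_id, sum(amount)
    FROM orders
    WHERE order_date >= DATE '2013-01-01'
    GROUP BY customer_id;

    -- work_mem can be tried per session before changing it server-wide
    SET work_mem = '256MB';

Re-running the EXPLAIN after the SET shows whether sorts and hashes stop spilling to disk for that particular query.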
[
{
"msg_contents": "I managed to optimize the query below, adding a subselect (is disabled) \nafter the \"where\" it would be correct?\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT CONCAT_WS(' # ',\nCONCAT_WS(' ', produto.descricao, produto_watts.descricao, \nproduto_serie.descricao, produto_tecnologia.descricao, \nproduto_modelo.descricao),\nproduto_cliente.descricao) as produto_descricao,\nproduto.id as produto_id,\nproduto.codigo as produto_codigo,\nproduto.descricao2 as descricao2,\nproduto.descricao_extendida as produto_descricao_extendida,\nproduto.referencia as referencia,\nestoque.id as estoque_id,\nestoque.localizacao as localizacao,\nestoque.data_disponibilidade as data_disponibilidade,\nestoque.quantidade - estoque.quantidade_temporaria as quantidade_disponivel,\nestoque.quantidade as quantidade_estoque,\nestoque.quantidade_temporaria as quantidade_temporaria,\nproduto_unidade_medida.casa_decimal as casa_decimal,\nproduto_grade.descricao as grade,\nfinanceiro_moeda_cotacao.preco_venda as cotacao,\npreco.moeda_id as moeda_id,\nCASE WHEN (empresa.pedido_moeda_id IS NULL AND \nfinanceiro_moeda_cotacao.preco_venda IS NOT NULL) THEN (preco.preco1 * \nfinanceiro_moeda_cotacao.preco_venda) ELSE preco.preco1 END as preco1,\nCASE WHEN (empresa.pedido_moeda_id IS NULL AND \nfinanceiro_moeda_cotacao.preco_venda IS NOT NULL) THEN (preco.preco2 * \nfinanceiro_moeda_cotacao.preco_venda) ELSE preco.preco2 END as preco2,\nCASE WHEN (empresa.pedido_moeda_id IS NULL AND \nfinanceiro_moeda_cotacao.preco_venda IS NOT NULL) THEN (preco.preco3 * \nfinanceiro_moeda_cotacao.preco_venda) ELSE preco.preco3 END as preco3,\nCASE WHEN (empresa.pedido_moeda_id IS NULL AND \nfinanceiro_moeda_cotacao.preco_venda IS NOT NULL) THEN (preco.preco4 * \nfinanceiro_moeda_cotacao.preco_venda) ELSE preco.preco4 END as preco4,\nproduto_fabricante.descricao as fabricante\nFROM produto\nINNER JOIN produto_venda ON produto.id = produto_venda.produto_id\nINNER JOIN empresa ON produto_venda.empresa_id = empresa.id\nLEFT JOIN produto_cliente ON produto_cliente.produto_id = produto.id -- \nAND produto_cliente.cliente_id = ?\nLEFT JOIN produto_fabricante ON produto.fabricante_id = \nproduto_fabricante.id\nLEFT JOIN produto_serie ON produto.serie_id = produto_serie.id\nLEFT JOIN produto_tecnologia ON produto.tecnologia_id = \nproduto_tecnologia.id\nLEFT JOIN produto_watts ON produto.watts_id = produto_watts.id\nLEFT JOIN produto_modelo ON produto.modelo_id = produto_modelo.id\nLEFT JOIN produto_classe ON produto.classe_id = produto_classe.id\nLEFT JOIN produto_familia ON produto.familia_id = produto_familia.id\nLEFT JOIN produto_porcentagem ON produto.porcentagem_id = \nproduto_porcentagem.id\nLEFT JOIN produto_unidade_medida ON produto.unidade_medida_id = \nproduto_unidade_medida.id\nLEFT JOIN preco ON preco.produto_id = produto.id\nLEFT JOIN estoque ON estoque.produto_id = produto.id\nLEFT JOIN financeiro_moeda_cotacao ON financeiro_moeda_cotacao.moeda_id \n= preco.moeda_id AND financeiro_moeda_cotacao.data = CURRENT_DATE\nLEFT JOIN produto_grade ON estoque.grade_id = produto_grade.id\nWHERE (\n produto_venda.empresa_id = 1 AND\n produto_venda.venda = 't' AND\n preco.tabela_id = 4 AND\n estoque.deposito_id = '2' AND\n estoque.desativado = 'f' AND\n (produto.id IN (SELECT produto_cliente.produto_id FROM \nproduto_cliente WHERE (COMPAT_LIKE(produto_cliente.descricao) LIKE \nCOMPAT_LIKE('broca') OR COMPAT_LIKE(produto_cliente.descricao) LIKE \nCOMPAT_LIKE('%broca%') AND cliente_id = 3680) )\n OR\n (\n --produto.id IN (\n --SELECT produto.id 
FROM produto WHERE\n COMPAT_LIKE(produto.codigo) LIKE COMPAT_LIKE('broca%') OR\n COMPAT_LIKE(produto.descricao) LIKE COMPAT_LIKE('%broca%') OR\n COMPAT_LIKE(produto.referencia) LIKE COMPAT_LIKE('%broca%')\n --)\n )\n )\n )\n ORDER BY produto.descricao, grade, (estoque.quantidade - \nestoque.quantidade_temporaria) desc LIMIT 10\n\nwithout subselect http://explain.depesz.com/s/Qec\nwith subselect http://explain.depesz.com/s/936a\n\nThanks\n\nAlexandre Riveira",
"msg_date": "Mon, 11 Feb 2013 10:52:07 -0200",
"msg_from": "Alexandre Riveira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is it correct to optimize a query with subselect in the \"where\"?"
}
] |
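For reference, a stripped-down sketch of the two forms being compared, using only the produto table and plain LIKE instead of COMPAT_LIKE; the forms are guaranteed to return the same rows when produto.id is unique (for example the primary key), so comparing their EXPLAIN (ANALYZE, BUFFERS) plans, as in the depesz links above, is the way to judge whether the rewrite is both correct and faster:

    -- predicates applied directly to the table
    SELECT p.id, p.descricao
    FROM produto p
    WHERE p.codigo LIKE 'broca%'
       OR p.descricao LIKE '%broca%';

    -- the same filter expressed as the (commented-out) subselect
    SELECT p.id, p.descricao
    FROM produto p
    WHERE p.id IN (SELECT id
                   FROM produto
                   WHERE codigo LIKE 'broca%'
                      OR descricao LIKE '%broca%');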
[
{
"msg_contents": "Hello,\n\nWe upgraded from Ubuntu 11.04 to Ubuntu 12.04 and almost immediately obeserved increased CPU usage and significantly higher load average on our database server.\nAt the time we were on Postgres 9.0.5. We decided to upgrade to Postgres 9.2 to see if that resolves the issue, but unfortunately it did not.\n\nJust for illustration purposes, below are a few links to cpu and load graphs pre and post upgrade.\n\nhttps://s3.amazonaws.com/iqtell.ops/Load+Average+Post+Upgrade.png\nhttps://s3.amazonaws.com/iqtell.ops/Load+Average+Pre+Upgrade.png\n\nhttps://s3.amazonaws.com/iqtell.ops/Server+CPU+Post+Upgrade.png\nhttps://s3.amazonaws.com/iqtell.ops/Server+CPU+Pre+Upgrade.png\n\nWe also tried tweaking kernel parameters as mentioned here - http://www.postgresql.org/message-id/[email protected], but have not seen any improvement.\n\n\nAny advice on how to trace what could be causing the change in CPU usage and load average is appreciated.\n\nOur postgres version is:\n\nPostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n\nOS:\n\nLinux ip-10-189-175-25 3.2.0-37-virtual #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux\n\nHardware (this an Amazon Ec2 High memory quadruple extra large instance):\n\n8 core Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz\n68 GB RAM\nRAID10 with 8 drives using xfs\nDrives are EBS with provisioned IOPS, with 1000 iops each\n\nPostgres Configuration:\n\narchive_command = rsync -a %p slave:/var/lib/postgresql/replication_load/%f\narchive_mode = on\ncheckpoint_completion_target = 0.9\ncheckpoint_segments = 64\ncheckpoint_timeout = 30min\ndefault_text_search_config = pg_catalog.english\nexternal_pid_file = /var/run/postgresql/9.2-main.pid\nlc_messages = en_US.UTF-8\nlc_monetary = en_US.UTF-8\nlc_numeric = en_US.UTF-8\nlc_time = en_US.UTF-8\nlisten_addresses = *\nlog_checkpoints=on\nlog_destination=stderr\nlog_line_prefix = %t [%p]: [%l-1]\nlog_min_duration_statement =500\nmax_connections=300\nmax_stack_depth=2MB\nmax_wal_senders=5\nshared_buffers=4GB\nsynchronous_commit=off\nunix_socket_directory=/var/run/postgresql\nwal_keep_segments=128\nwal_level=hot_standby\nwork_mem=8MB\n\n\nThanks,\nDan\n\nHello, We upgraded from Ubuntu 11.04 to Ubuntu 12.04 and almost immediately obeserved increased CPU usage and significantly higher load average on our database server.At the time we were on Postgres 9.0.5. We decided to upgrade to Postgres 9.2 to see if that resolves the issue, but unfortunately it did not. Just for illustration purposes, below are a few links to cpu and load graphs pre and post upgrade. https://s3.amazonaws.com/iqtell.ops/Load+Average+Post+Upgrade.png https://s3.amazonaws.com/iqtell.ops/Load+Average+Pre+Upgrade.png https://s3.amazonaws.com/iqtell.ops/Server+CPU+Post+Upgrade.png https://s3.amazonaws.com/iqtell.ops/Server+CPU+Pre+Upgrade.png We also tried tweaking kernel parameters as mentioned here - http://www.postgresql.org/message-id/[email protected], but have not seen any improvement. Any advice on how to trace what could be causing the change in CPU usage and load average is appreciated. 
Our postgres version is: PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit OS: Linux ip-10-189-175-25 3.2.0-37-virtual #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux Hardware (this an Amazon Ec2 High memory quadruple extra large instance): 8 core Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz68 GB RAMRAID10 with 8 drives using xfsDrives are EBS with provisioned IOPS, with 1000 iops each Postgres Configuration: archive_command = rsync -a %p slave:/var/lib/postgresql/replication_load/%farchive_mode = oncheckpoint_completion_target = 0.9checkpoint_segments = 64checkpoint_timeout = 30mindefault_text_search_config = pg_catalog.englishexternal_pid_file = /var/run/postgresql/9.2-main.pidlc_messages = en_US.UTF-8lc_monetary = en_US.UTF-8lc_numeric = en_US.UTF-8lc_time = en_US.UTF-8listen_addresses = *log_checkpoints=onlog_destination=stderrlog_line_prefix = %t [%p]: [%l-1]log_min_duration_statement =500max_connections=300max_stack_depth=2MBmax_wal_senders=5shared_buffers=4GBsynchronous_commit=offunix_socket_directory=/var/run/postgresqlwal_keep_segments=128wal_level=hot_standbywork_mem=8MB Thanks,Dan",
"msg_date": "Tue, 12 Feb 2013 12:25:48 -0500",
"msg_from": "Dan Kogan <[email protected]>",
"msg_from_op": true,
"msg_subject": "High CPU usage / load average after upgrading to Ubuntu 12.04"
},
{
"msg_contents": "Thanks for the reply. We are still using postgresql-9.0-801.jdbc4.jar. It seemed to us that this is more related to the OS than the JDBC, version as we had the issue before we upgraded to 9.2.\nIt might still be worth a try.\n\nJust out of curiosity, has anyone else experienced performance issues (or even tried) with the 9.0 jdbc driver against 9.2 server?\n\nDan\n\nFrom: Eric Haertel [mailto:[email protected]]\nSent: Tuesday, February 12, 2013 12:52 PM\nTo: Dan Kogan\nCc: [email protected]\nSubject: Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu 12.04\n\nI don't know if it helps, but I had after update from 8.4 to 9.1 extrem problems with my local test until I changed the JDBC driver to the propper version. I'm not shure if the load occured on the client or the server side as the local integration test run on my machine.\n\n2013/2/12 Dan Kogan <[email protected]<mailto:[email protected]>>\nHello,\n\nWe upgraded from Ubuntu 11.04 to Ubuntu 12.04 and almost immediately obeserved increased CPU usage and significantly higher load average on our database server.\nAt the time we were on Postgres 9.0.5. We decided to upgrade to Postgres 9.2 to see if that resolves the issue, but unfortunately it did not.\n\nJust for illustration purposes, below are a few links to cpu and load graphs pre and post upgrade.\n\nhttps://s3.amazonaws.com/iqtell.ops/Load+Average+Post+Upgrade.png\nhttps://s3.amazonaws.com/iqtell.ops/Load+Average+Pre+Upgrade.png\n\nhttps://s3.amazonaws.com/iqtell.ops/Server+CPU+Post+Upgrade.png\nhttps://s3.amazonaws.com/iqtell.ops/Server+CPU+Pre+Upgrade.png\n\nWe also tried tweaking kernel parameters as mentioned here - http://www.postgresql.org/message-id/[email protected], but have not seen any improvement.\n\n\nAny advice on how to trace what could be causing the change in CPU usage and load average is appreciated.\n\nOur postgres version is:\n\nPostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n\nOS:\n\nLinux ip-10-189-175-25 3.2.0-37-virtual #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux\n\nHardware (this an Amazon Ec2 High memory quadruple extra large instance):\n\n8 core Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz\n68 GB RAM\nRAID10 with 8 drives using xfs\nDrives are EBS with provisioned IOPS, with 1000 iops each\n\nPostgres Configuration:\n\narchive_command = rsync -a %p slave:/var/lib/postgresql/replication_load/%f\narchive_mode = on\ncheckpoint_completion_target = 0.9\ncheckpoint_segments = 64\ncheckpoint_timeout = 30min\ndefault_text_search_config = pg_catalog.english\nexternal_pid_file = /var/run/postgresql/9.2-main.pid\nlc_messages = en_US.UTF-8\nlc_monetary = en_US.UTF-8\nlc_numeric = en_US.UTF-8\nlc_time = en_US.UTF-8\nlisten_addresses = *\nlog_checkpoints=on\nlog_destination=stderr\nlog_line_prefix = %t [%p]: [%l-1]\nlog_min_duration_statement =500\nmax_connections=300\nmax_stack_depth=2MB\nmax_wal_senders=5\nshared_buffers=4GB\nsynchronous_commit=off\nunix_socket_directory=/var/run/postgresql\nwal_keep_segments=128\nwal_level=hot_standby\nwork_mem=8MB\n\n\nThanks,\nDan\n\n\n\n--\n\nEric Härtel\nSenior Software Developer\n\nTel.: +49 (0) 30 240 20 40 35\n\nMobil: +49 (0) 174 43 38 614\nEmail: [email protected]<mailto:[email protected]>\n\n\n\n[cid:[email protected]]\n\n\n\nGroupon GmbH & Co. Service KG | Oberwallstraße 6 | 10117 Berlin\npersönlich haftende Gesellschafterin: Groupon Verwaltungs GmbH, HRB 131594 B\nGeschäftsführer: Mark S. 
Hoyt | Bradley Downes | Daniel Köllner\nEingetragen beim Amtsgericht Charlottenburg Berlin, HRA 45265 B | USt.-ID Nr. DE 279 803 459",
"msg_date": "Tue, 12 Feb 2013 14:59:16 -0500",
"msg_from": "Dan Kogan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU usage / load average after upgrading to\n Ubuntu 12.04"
},
{
"msg_contents": "Hi Will,\n\nYes, I think we've seen some discussions on that. Our servers our hosted on Amazon Ec2 and upgrading the kernel does not seem so straight forward.\nWe did a benchmark using pgbench on 3.5 vs 3.2 and saw an improvement. Unfortunately our production server would not boot off 3.5 so we had to revert back to 3.2.\n\nAt this point we are contemplating whether it's better to go back to 11.04 or upgrade to 12.10 (which comes with kernel version 3.5).\nAny thoughts on that would be appreciated.\n\nDan\n\nFrom: Will Ferguson [mailto:[email protected]]\nSent: Tuesday, February 12, 2013 5:20 PM\nTo: Dan Kogan; [email protected]\nSubject: Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu 12.04\n\nHey Dan,\n\nIf I recall correctly there were some discussions on here related to performance issues with the 3.2 kernel. I'm away at the moment so can't dig them out but there have been much discussions lately about kernel performance in 3.2 which don't seem present in 3.4. I'll see if I can find them when I'm next at my desk.\n\nWill\n\n\nSent from Samsung Mobile\n\n\n\n-------- Original message --------\nFrom: Dan Kogan <[email protected]<mailto:[email protected]>>\nDate:\nTo: [email protected]<mailto:[email protected]>\nSubject: Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu 12.04\n\nThanks for the reply. We are still using postgresql-9.0-801.jdbc4.jar. It seemed to us that this is more related to the OS than the JDBC, version as we had the issue before we upgraded to 9.2.\nIt might still be worth a try.\n\nJust out of curiosity, has anyone else experienced performance issues (or even tried) with the 9.0 jdbc driver against 9.2 server?\n\nDan\n\nFrom: Eric Haertel [mailto:[email protected]]\nSent: Tuesday, February 12, 2013 12:52 PM\nTo: Dan Kogan\nCc: [email protected]<mailto:[email protected]>\nSubject: Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu 12.04\n\nI don't know if it helps, but I had after update from 8.4 to 9.1 extrem problems with my local test until I changed the JDBC driver to the propper version. I'm not shure if the load occured on the client or the server side as the local integration test run on my machine.\n\n2013/2/12 Dan Kogan <[email protected]<mailto:[email protected]>>\nHello,\n\nWe upgraded from Ubuntu 11.04 to Ubuntu 12.04 and almost immediately obeserved increased CPU usage and significantly higher load average on our database server.\nAt the time we were on Postgres 9.0.5. 
We decided to upgrade to Postgres 9.2 to see if that resolves the issue, but unfortunately it did not.\n\nJust for illustration purposes, below are a few links to cpu and load graphs pre and post upgrade.\n\nhttps://s3.amazonaws.com/iqtell.ops/Load+Average+Post+Upgrade.png\nhttps://s3.amazonaws.com/iqtell.ops/Load+Average+Pre+Upgrade.png\n\nhttps://s3.amazonaws.com/iqtell.ops/Server+CPU+Post+Upgrade.png\nhttps://s3.amazonaws.com/iqtell.ops/Server+CPU+Pre+Upgrade.png\n\nWe also tried tweaking kernel parameters as mentioned here - http://www.postgresql.org/message-id/[email protected], but have not seen any improvement.\n\n\nAny advice on how to trace what could be causing the change in CPU usage and load average is appreciated.\n\nOur postgres version is:\n\nPostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n\nOS:\n\nLinux ip-10-189-175-25 3.2.0-37-virtual #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux\n\nHardware (this an Amazon Ec2 High memory quadruple extra large instance):\n\n8 core Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz\n68 GB RAM\nRAID10 with 8 drives using xfs\nDrives are EBS with provisioned IOPS, with 1000 iops each\n\nPostgres Configuration:\n\narchive_command = rsync -a %p slave:/var/lib/postgresql/replication_load/%f\narchive_mode = on\ncheckpoint_completion_target = 0.9\ncheckpoint_segments = 64\ncheckpoint_timeout = 30min\ndefault_text_search_config = pg_catalog.english\nexternal_pid_file = /var/run/postgresql/9.2-main.pid\nlc_messages = en_US.UTF-8\nlc_monetary = en_US.UTF-8\nlc_numeric = en_US.UTF-8\nlc_time = en_US.UTF-8\nlisten_addresses = *\nlog_checkpoints=on\nlog_destination=stderr\nlog_line_prefix = %t [%p]: [%l-1]\nlog_min_duration_statement =500\nmax_connections=300\nmax_stack_depth=2MB\nmax_wal_senders=5\nshared_buffers=4GB\nsynchronous_commit=off\nunix_socket_directory=/var/run/postgresql\nwal_keep_segments=128\nwal_level=hot_standby\nwork_mem=8MB\n\n\nThanks,\nDan\n\n\n\n--\n\nEric Härtel\nSenior Software Developer\n\nTel.: +49 (0) 30 240 20 40 35\n\nMobil: +49 (0) 174 43 38 614\nEmail: [email protected]<mailto:[email protected]>\n\n\n\n[cid:[email protected]]\n\n\n\nGroupon GmbH & Co. Service KG | Oberwallstraße 6 | 10117 Berlin\npersönlich haftende Gesellschafterin: Groupon Verwaltungs GmbH, HRB 131594 B\nGeschäftsführer: Mark S. Hoyt | Bradley Downes | Daniel Köllner\nEingetragen beim Amtsgericht Charlottenburg Berlin, HRA 45265 B | USt.-ID Nr. DE 279 803 459",
"msg_date": "Tue, 12 Feb 2013 20:28:41 -0500",
"msg_from": "Dan Kogan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU usage / load average after upgrading to\n Ubuntu 12.04"
},
{
"msg_contents": "On 02/12/2013 05:28 PM, Dan Kogan wrote:\n> Hi Will,\n> \n> Yes, I think we've seen some discussions on that. Our servers our hosted on Amazon Ec2 and upgrading the kernel does not seem so straight forward.\n> We did a benchmark using pgbench on 3.5 vs 3.2 and saw an improvement. Unfortunately our production server would not boot off 3.5 so we had to revert back to 3.2.\n> \n> At this point we are contemplating whether it's better to go back to 11.04 or upgrade to 12.10 (which comes with kernel version 3.5).\n> Any thoughts on that would be appreciated.\n\nI have a machine running the same version of Ubuntu. I'll run some\ntests and tell you what I find.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Feb 2013 11:24:41 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
{
"msg_contents": "On 02/13/2013 11:24 AM, Josh Berkus wrote:\n> On 02/12/2013 05:28 PM, Dan Kogan wrote:\n>> Hi Will,\n>>\n>> Yes, I think we've seen some discussions on that. Our servers our hosted on Amazon Ec2 and upgrading the kernel does not seem so straight forward.\n>> We did a benchmark using pgbench on 3.5 vs 3.2 and saw an improvement. Unfortunately our production server would not boot off 3.5 so we had to revert back to 3.2.\n>>\n>> At this point we are contemplating whether it's better to go back to 11.04 or upgrade to 12.10 (which comes with kernel version 3.5).\n>> Any thoughts on that would be appreciated.\n> \n> I have a machine running the same version of Ubuntu. I'll run some\n> tests and tell you what I find.\n\nSo I'm running a pgbench. However, I don't really have anything to\ncompare the stats I'm seeing. CPU usage and load average was high (load\n7.9), but that was on -j 8 -c 32, with a TPS of 8500.\n\nWhat numbers are you seeing, exactly?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Feb 2013 16:25:52 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
{
"msg_contents": "Just to be clear - I was describing the current situation in our production.\r\n\r\nWe were running pgbench on different Ununtu versions today. I don’t have 12.04 setup at the moment, but I do have 12.10, which seems to be performing about the same as 12.04 in our tests with pgbench.\r\nRunning pgbench with 8 jobs and 32 clients resulted in load average of about 15 and TPS was 51350.\r\n\r\nQuestion - how many cores does your server have? Ours has 8 cores.\r\n\r\nThanks,\r\nDan \r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Josh Berkus\r\nSent: Wednesday, February 13, 2013 7:26 PM\r\nTo: [email protected]\r\nSubject: Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu 12.04\r\n\r\nOn 02/13/2013 11:24 AM, Josh Berkus wrote:\r\n> On 02/12/2013 05:28 PM, Dan Kogan wrote:\r\n>> Hi Will,\r\n>>\r\n>> Yes, I think we've seen some discussions on that. Our servers our hosted on Amazon Ec2 and upgrading the kernel does not seem so straight forward.\r\n>> We did a benchmark using pgbench on 3.5 vs 3.2 and saw an improvement. Unfortunately our production server would not boot off 3.5 so we had to revert back to 3.2.\r\n>>\r\n>> At this point we are contemplating whether it's better to go back to 11.04 or upgrade to 12.10 (which comes with kernel version 3.5).\r\n>> Any thoughts on that would be appreciated.\r\n> \r\n> I have a machine running the same version of Ubuntu. I'll run some \r\n> tests and tell you what I find.\r\n\r\nSo I'm running a pgbench. However, I don't really have anything to compare the stats I'm seeing. CPU usage and load average was high (load 7.9), but that was on -j 8 -c 32, with a TPS of 8500.\r\n\r\nWhat numbers are you seeing, exactly?\r\n\r\n--\r\nJosh Berkus\r\nPostgreSQL Experts Inc.\r\nhttp://pgexperts.com\r\n\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Feb 2013 20:30:38 -0500",
"msg_from": "Dan Kogan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU usage / load average after upgrading to\n Ubuntu 12.04"
},
{
"msg_contents": "On Tue, Feb 12, 2013 at 11:25 AM, Dan Kogan <[email protected]> wrote:\n> Hello,\n>\n>\n>\n> We upgraded from Ubuntu 11.04 to Ubuntu 12.04 and almost immediately\n> obeserved increased CPU usage and significantly higher load average on our\n> database server.\n>\n> At the time we were on Postgres 9.0.5. We decided to upgrade to Postgres\n> 9.2 to see if that resolves the issue, but unfortunately it did not.\n>\n>\n>\n> Just for illustration purposes, below are a few links to cpu and load graphs\n> pre and post upgrade.\n>\n>\n>\n> https://s3.amazonaws.com/iqtell.ops/Load+Average+Post+Upgrade.png\n>\n> https://s3.amazonaws.com/iqtell.ops/Load+Average+Pre+Upgrade.png\n>\n>\n>\n> https://s3.amazonaws.com/iqtell.ops/Server+CPU+Post+Upgrade.png\n>\n> https://s3.amazonaws.com/iqtell.ops/Server+CPU+Pre+Upgrade.png\n>\n>\n>\n> We also tried tweaking kernel parameters as mentioned here -\n> http://www.postgresql.org/message-id/[email protected], but\n> have not seen any improvement.\n>\n>\n>\n>\n>\n> Any advice on how to trace what could be causing the change in CPU usage and\n> load average is appreciated.\n>\n>\n>\n> Our postgres version is:\n>\n>\n>\n> PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro\n> 4.6.3-1ubuntu5) 4.6.3, 64-bit\n>\n>\n>\n> OS:\n>\n>\n>\n> Linux ip-10-189-175-25 3.2.0-37-virtual #58-Ubuntu SMP Thu Jan 24 15:48:03\n> UTC 2013 x86_64 x86_64 x86_64 GNU/Linux\n>\n>\n>\n> Hardware (this an Amazon Ec2 High memory quadruple extra large instance):\n>\n>\n>\n> 8 core Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz\n>\n> 68 GB RAM\n>\n> RAID10 with 8 drives using xfs\n>\n> Drives are EBS with provisioned IOPS, with 1000 iops each\n>\n>\n>\n> Postgres Configuration:\n>\n>\n>\n> archive_command = rsync -a %p slave:/var/lib/postgresql/replication_load/%f\n>\n> archive_mode = on\n>\n> checkpoint_completion_target = 0.9\n>\n> checkpoint_segments = 64\n>\n> checkpoint_timeout = 30min\n>\n> default_text_search_config = pg_catalog.english\n>\n> external_pid_file = /var/run/postgresql/9.2-main.pid\n>\n> lc_messages = en_US.UTF-8\n>\n> lc_monetary = en_US.UTF-8\n>\n> lc_numeric = en_US.UTF-8\n>\n> lc_time = en_US.UTF-8\n>\n> listen_addresses = *\n>\n> log_checkpoints=on\n>\n> log_destination=stderr\n>\n> log_line_prefix = %t [%p]: [%l-1]\n>\n> log_min_duration_statement =500\n>\n> max_connections=300\n>\n> max_stack_depth=2MB\n>\n> max_wal_senders=5\n>\n> shared_buffers=4GB\n>\n> synchronous_commit=off\n>\n> unix_socket_directory=/var/run/postgresql\n>\n> wal_keep_segments=128\n>\n> wal_level=hot_standby\n>\n> work_mem=8MB\n\ndoes your application have a lot of concurrency? history has shown\nthat postgres is highly sensitive to changes in the o/s scheduler\n(which changes a lot from release to release).\n\nalso check this:\nzone reclaim (http://frosty-postgres.blogspot.com/2012/08/postgresql-numa-and-zone-reclaim-mode.html)\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Feb 2013 08:07:48 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu 12.04"
},
{
"msg_contents": "Thanks for the info.\nOur application does have a lot of concurrency. We checked the zone reclaim parameter and it is turn off (that was the default, we did not have to change it).\n\nDan\n\n-----Original Message-----\nFrom: Merlin Moncure [mailto:[email protected]] \nSent: Thursday, February 14, 2013 9:08 AM\nTo: Dan Kogan\nCc: [email protected]\nSubject: Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu 12.04\n\nOn Tue, Feb 12, 2013 at 11:25 AM, Dan Kogan <[email protected]> wrote:\n> Hello,\n>\n>\n>\n> We upgraded from Ubuntu 11.04 to Ubuntu 12.04 and almost immediately \n> obeserved increased CPU usage and significantly higher load average on \n> our database server.\n>\n> At the time we were on Postgres 9.0.5. We decided to upgrade to \n> Postgres\n> 9.2 to see if that resolves the issue, but unfortunately it did not.\n>\n>\n>\n> Just for illustration purposes, below are a few links to cpu and load \n> graphs pre and post upgrade.\n>\n>\n>\n> https://s3.amazonaws.com/iqtell.ops/Load+Average+Post+Upgrade.png\n>\n> https://s3.amazonaws.com/iqtell.ops/Load+Average+Pre+Upgrade.png\n>\n>\n>\n> https://s3.amazonaws.com/iqtell.ops/Server+CPU+Post+Upgrade.png\n>\n> https://s3.amazonaws.com/iqtell.ops/Server+CPU+Pre+Upgrade.png\n>\n>\n>\n> We also tried tweaking kernel parameters as mentioned here - \n> http://www.postgresql.org/message-id/[email protected]\n> , but have not seen any improvement.\n>\n>\n>\n>\n>\n> Any advice on how to trace what could be causing the change in CPU \n> usage and load average is appreciated.\n>\n>\n>\n> Our postgres version is:\n>\n>\n>\n> PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc \n> (Ubuntu/Linaro\n> 4.6.3-1ubuntu5) 4.6.3, 64-bit\n>\n>\n>\n> OS:\n>\n>\n>\n> Linux ip-10-189-175-25 3.2.0-37-virtual #58-Ubuntu SMP Thu Jan 24 \n> 15:48:03 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux\n>\n>\n>\n> Hardware (this an Amazon Ec2 High memory quadruple extra large instance):\n>\n>\n>\n> 8 core Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz\n>\n> 68 GB RAM\n>\n> RAID10 with 8 drives using xfs\n>\n> Drives are EBS with provisioned IOPS, with 1000 iops each\n>\n>\n>\n> Postgres Configuration:\n>\n>\n>\n> archive_command = rsync -a %p \n> slave:/var/lib/postgresql/replication_load/%f\n>\n> archive_mode = on\n>\n> checkpoint_completion_target = 0.9\n>\n> checkpoint_segments = 64\n>\n> checkpoint_timeout = 30min\n>\n> default_text_search_config = pg_catalog.english\n>\n> external_pid_file = /var/run/postgresql/9.2-main.pid\n>\n> lc_messages = en_US.UTF-8\n>\n> lc_monetary = en_US.UTF-8\n>\n> lc_numeric = en_US.UTF-8\n>\n> lc_time = en_US.UTF-8\n>\n> listen_addresses = *\n>\n> log_checkpoints=on\n>\n> log_destination=stderr\n>\n> log_line_prefix = %t [%p]: [%l-1]\n>\n> log_min_duration_statement =500\n>\n> max_connections=300\n>\n> max_stack_depth=2MB\n>\n> max_wal_senders=5\n>\n> shared_buffers=4GB\n>\n> synchronous_commit=off\n>\n> unix_socket_directory=/var/run/postgresql\n>\n> wal_keep_segments=128\n>\n> wal_level=hot_standby\n>\n> work_mem=8MB\n\ndoes your application have a lot of concurrency? history has shown that postgres is highly sensitive to changes in the o/s scheduler (which changes a lot from release to release).\n\nalso check this:\nzone reclaim (http://frosty-postgres.blogspot.com/2012/08/postgresql-numa-and-zone-reclaim-mode.html)\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Feb 2013 12:27:13 -0500",
"msg_from": "Dan Kogan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU usage / load average after upgrading to\n Ubuntu 12.04"
},
{
"msg_contents": "On 02/13/2013 05:30 PM, Dan Kogan wrote:\n> Just to be clear - I was describing the current situation in our production.\n> \n> We were running pgbench on different Ununtu versions today. I don’t have 12.04 setup at the moment, but I do have 12.10, which seems to be performing about the same as 12.04 in our tests with pgbench.\n> Running pgbench with 8 jobs and 32 clients resulted in load average of about 15 and TPS was 51350.\n\nWhat size database?\n\n> \n> Question - how many cores does your server have? Ours has 8 cores.\n\n32\n\nI suppose I could throw multiple pgbenches at it. I just dont' see the\nload numbers as unusual, but I don't have a similar pre-12.04 server to\ncompare with.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Feb 2013 10:38:09 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
{
"msg_contents": "We used scale factor of 3600. \r\nYeah, maybe other people see similar load average, we were not sure.\r\nHowever, we saw a clear difference right after the upgrade. \r\nWe are trying to determine whether it makes sense for us to go to 11.04 or maybe there is something here we are missing.\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Josh Berkus\r\nSent: Thursday, February 14, 2013 1:38 PM\r\nTo: [email protected]\r\nSubject: Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu 12.04\r\n\r\nOn 02/13/2013 05:30 PM, Dan Kogan wrote:\r\n> Just to be clear - I was describing the current situation in our production.\r\n> \r\n> We were running pgbench on different Ununtu versions today. I don’t have 12.04 setup at the moment, but I do have 12.10, which seems to be performing about the same as 12.04 in our tests with pgbench.\r\n> Running pgbench with 8 jobs and 32 clients resulted in load average of about 15 and TPS was 51350.\r\n\r\nWhat size database?\r\n\r\n> \r\n> Question - how many cores does your server have? Ours has 8 cores.\r\n\r\n32\r\n\r\nI suppose I could throw multiple pgbenches at it. I just dont' see the load numbers as unusual, but I don't have a similar pre-12.04 server to compare with.\r\n\r\n\r\n--\r\nJosh Berkus\r\nPostgreSQL Experts Inc.\r\nhttp://pgexperts.com\r\n\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Feb 2013 15:41:26 -0500",
"msg_from": "Dan Kogan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU usage / load average after upgrading to\n Ubuntu 12.04"
},
{
"msg_contents": "On 02/14/2013 12:41 PM, Dan Kogan wrote:\n> We used scale factor of 3600. \n> Yeah, maybe other people see similar load average, we were not sure.\n> However, we saw a clear difference right after the upgrade. \n> We are trying to determine whether it makes sense for us to go to 11.04 or maybe there is something here we are missing.\n\nWell, I'm seeing a higher system % on CPU than I expect (around 15% on\neach core), and a MUCH higher context-switch than I expect (up to 500K).\n Is that anything like you're seeing?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Feb 2013 15:57:50 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
{
"msg_contents": "Yes, we are seeing higher system % on the CPU, not sure how to quantify in terms of % right now - will check into that tomorrow.\r\nWe were not checking the context switch numbers during our benchmark, will check that tomorrow as well.\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Josh Berkus\r\nSent: Thursday, February 14, 2013 6:58 PM\r\nTo: [email protected]\r\nSubject: Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu 12.04\r\n\r\nOn 02/14/2013 12:41 PM, Dan Kogan wrote:\r\n> We used scale factor of 3600. \r\n> Yeah, maybe other people see similar load average, we were not sure.\r\n> However, we saw a clear difference right after the upgrade. \r\n> We are trying to determine whether it makes sense for us to go to 11.04 or maybe there is something here we are missing.\r\n\r\nWell, I'm seeing a higher system % on CPU than I expect (around 15% on each core), and a MUCH higher context-switch than I expect (up to 500K).\r\n Is that anything like you're seeing?\r\n\r\n--\r\nJosh Berkus\r\nPostgreSQL Experts Inc.\r\nhttp://pgexperts.com\r\n\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Feb 2013 23:32:15 -0500",
"msg_from": "Dan Kogan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High CPU usage / load average after upgrading to\n Ubuntu 12.04"
},
{
"msg_contents": "If you run your benchmarks for more than a few minutes I highly\nrecommend enabling sysstat service data collection, then you can look\nat it after the fact with sar. VERY useful stuff both for\nbenchmarking and post mortem on live servers.\n\nOn Thu, Feb 14, 2013 at 9:32 PM, Dan Kogan <[email protected]> wrote:\n> Yes, we are seeing higher system % on the CPU, not sure how to quantify in terms of % right now - will check into that tomorrow.\n> We were not checking the context switch numbers during our benchmark, will check that tomorrow as well.\n>\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]] On Behalf Of Josh Berkus\n> Sent: Thursday, February 14, 2013 6:58 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu 12.04\n>\n> On 02/14/2013 12:41 PM, Dan Kogan wrote:\n>> We used scale factor of 3600.\n>> Yeah, maybe other people see similar load average, we were not sure.\n>> However, we saw a clear difference right after the upgrade.\n>> We are trying to determine whether it makes sense for us to go to 11.04 or maybe there is something here we are missing.\n>\n> Well, I'm seeing a higher system % on CPU than I expect (around 15% on each core), and a MUCH higher context-switch than I expect (up to 500K).\n> Is that anything like you're seeing?\n>\n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Feb 2013 21:47:45 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu 12.04"
},
{
"msg_contents": "On 02/14/2013 08:47 PM, Scott Marlowe wrote:\n> If you run your benchmarks for more than a few minutes I highly\n> recommend enabling sysstat service data collection, then you can look\n> at it after the fact with sar. VERY useful stuff both for\n> benchmarking and post mortem on live servers.\n\nWell, background sar, by default on Linux, only collects every 30min.\nFor a benchmark run, you want to generate your own sar file, for example:\n\nsar -o hddrun2.sar -A 10 90 &\n\nwhich says \"collect all stats every 10 seconds and write them to the\nfile hddrun2.sar for 15 minutes\"\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Feb 2013 10:26:08 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
{
"msg_contents": "On Fri, Feb 15, 2013 at 11:26 AM, Josh Berkus <[email protected]> wrote:\n> On 02/14/2013 08:47 PM, Scott Marlowe wrote:\n>> If you run your benchmarks for more than a few minutes I highly\n>> recommend enabling sysstat service data collection, then you can look\n>> at it after the fact with sar. VERY useful stuff both for\n>> benchmarking and post mortem on live servers.\n>\n> Well, background sar, by default on Linux, only collects every 30min.\n> For a benchmark run, you want to generate your own sar file, for example:\n\nOn all my machines (debian and ubuntu) it collects every 5.\n\n> sar -o hddrun2.sar -A 10 90 &\n>\n> which says \"collect all stats every 10 seconds and write them to the\n> file hddrun2.sar for 15 minutes\"\n\nNot a bad idea. esp when benchmarking.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Feb 2013 11:52:34 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu 12.04"
},
{
"msg_contents": "So, our drop in performance is now clearly due to pathological OS\nbehavior during checkpoints. Still trying to pin down what's going on,\nbut it's not system load; it's clearly related to the IO system.\n\nAnyone else see this? I'm getting it both on 3.2 and 3.4. We're using\nLSI Megaraid.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Feb 2013 16:19:19 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
{
"msg_contents": "Scott,\n\n> So do you have generally slow IO, or is it fsync behavior etc?\n\nAll tests except pgBench show this system as superfast. Bonnie++ and DD\ntests are good (200 to 300mb/s), and test_fsync shows 14K/second.\nBasically it has no issues until checkpoint kicks in, at which time the\nentire system basically halts for the duration of the checkpoint.\n\nFor that matter, if I run a pgbench and halt it just before checkpoint\nkicks in, I get around 12000TPS, which is what I'd expect on this system.\n\nAt this point, we've tried 3.2.0.26, 3.2.0.27, 3.4.0, and tried updating\nthe RAID driver, and changing the IO scheduler. Nothing seems to affect\nthe behavior. Testing using Ext4 (instead of XFS) next.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Feb 2013 16:39:53 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
{
"msg_contents": "On Mon, Feb 18, 2013 at 6:39 PM, Josh Berkus <[email protected]> wrote:\n> Scott,\n>\n>> So do you have generally slow IO, or is it fsync behavior etc?\n>\n> All tests except pgBench show this system as superfast. Bonnie++ and DD\n> tests are good (200 to 300mb/s), and test_fsync shows 14K/second.\n> Basically it has no issues until checkpoint kicks in, at which time the\n> entire system basically halts for the duration of the checkpoint.\n>\n> For that matter, if I run a pgbench and halt it just before checkpoint\n> kicks in, I get around 12000TPS, which is what I'd expect on this system.\n>\n> At this point, we've tried 3.2.0.26, 3.2.0.27, 3.4.0, and tried updating\n> the RAID driver, and changing the IO scheduler. Nothing seems to affect\n> the behavior. Testing using Ext4 (instead of XFS) next.\n\nDid you try turning barriers on or off *manually* (explicitly)? With\nLSI and barriers *on* and ext4 I had less-optimal performance. With\nLinux MD or (some) 3Ware configurations I had no performance hit.\n\n-- \nJon\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Feb 2013 18:46:52 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu 12.04"
},
{
"msg_contents": "\n> Did you try turning barriers on or off *manually* (explicitly)? With\n> LSI and barriers *on* and ext4 I had less-optimal performance. With\n> Linux MD or (some) 3Ware configurations I had no performance hit.\n\nThey're off in fstab.\n\n/dev/sdd1 on /data type xfs (rw,noatime,nodiratime,nobarrier)\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Feb 2013 16:51:50 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
{
"msg_contents": "On Mon, Feb 18, 2013 at 5:39 PM, Josh Berkus <[email protected]> wrote:\n> Scott,\n>\n>> So do you have generally slow IO, or is it fsync behavior etc?\n>\n> All tests except pgBench show this system as superfast. Bonnie++ and DD\n> tests are good (200 to 300mb/s), and test_fsync shows 14K/second.\n> Basically it has no issues until checkpoint kicks in, at which time the\n> entire system basically halts for the duration of the checkpoint.\n\nI assume you've made attemtps at write levelling to reduce impacts of\ncheckpoints etc.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Feb 2013 20:41:26 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu 12.04"
},
{
"msg_contents": "On 19/02/13 13:39, Josh Berkus wrote:\n> Scott,\n>\n>> So do you have generally slow IO, or is it fsync behavior etc?\n> All tests except pgBench show this system as superfast. Bonnie++ and DD\n> tests are good (200 to 300mb/s), and test_fsync shows 14K/second.\n> Basically it has no issues until checkpoint kicks in, at which time the\n> entire system basically halts for the duration of the checkpoint.\n>\n> For that matter, if I run a pgbench and halt it just before checkpoint\n> kicks in, I get around 12000TPS, which is what I'd expect on this system.\n>\n> At this point, we've tried 3.2.0.26, 3.2.0.27, 3.4.0, and tried updating\n> the RAID driver, and changing the IO scheduler. Nothing seems to affect\n> the behavior. Testing using Ext4 (instead of XFS) next.\n>\n>\n\nMight be worth looking at your vm.dirty_ratio, vm.dirty_background_ratio \nand friends settings. We managed to choke up a system with 16x SSD by \nleaving them at their defaults...\n\nCheers\n\nMark\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Feb 2013 17:28:00 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
{
"msg_contents": "On 02/18/2013 08:28 PM, Mark Kirkwood wrote:\n> Might be worth looking at your vm.dirty_ratio, vm.dirty_background_ratio\n> and friends settings. We managed to choke up a system with 16x SSD by\n> leaving them at their defaults...\n\nYeah? Any settings you'd recommend specifically? What did you use on\nthe SSD system?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Feb 2013 09:51:16 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
{
"msg_contents": "On 20/02/13 06:51, Josh Berkus wrote:\n> On 02/18/2013 08:28 PM, Mark Kirkwood wrote:\n>> Might be worth looking at your vm.dirty_ratio, vm.dirty_background_ratio\n>> and friends settings. We managed to choke up a system with 16x SSD by\n>> leaving them at their defaults...\n> Yeah? Any settings you'd recommend specifically? What did you use on\n> the SSD system?\n>\n\nWe set:\n\nvm.dirty_background_ratio = 0\nvm.dirty_background_bytes = 1073741824\nvm.dirty_ratio = 0\nvm.dirty_bytes = 2147483648\n\ni.e 1G for dirty_background and 2G for dirty. We didn't spend much time \nafterwards fiddling with the size much. I'm guessing the we could have \nmade it bigger - however the SSD were happier to be constantly writing a \nfew G than being handed (say) 50G of buffers to write at once . The \nsystem has 512G of ram and 32 cores (no hyperthreading).\n\nregards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Feb 2013 12:17:22 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
{
"msg_contents": "On 02/19/2013 09:51 AM, Josh Berkus wrote:\n> On 02/18/2013 08:28 PM, Mark Kirkwood wrote:\n>> Might be worth looking at your vm.dirty_ratio, vm.dirty_background_ratio\n>> and friends settings. We managed to choke up a system with 16x SSD by\n>> leaving them at their defaults...\n> \n> Yeah? Any settings you'd recommend specifically? What did you use on\n> the SSD system?\n> \n\nNM, I tested lowering dirty_background_ratio, and it didn't help,\nbecause checkpoints are kicking in before pdflush ever gets there.\n\nSo the issue seems to be that if you have this combination of factors:\n\n1. large RAM\n2. many/fast CPUs\n3. a database which fits in RAM but is larger than the RAID controller's\nWB cache\n4. pg_xlog on the same volume as pgdata\n\n... then you'll see checkpoint \"stalls\" and spread checkpoint will\nactually make them worse by making the stalls longer.\n\nMoving pg_xlog to a separate partition makes this better. Making\nbgwriter more aggressive helps a bit more on top of that.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Feb 2013 15:24:31 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
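One way to watch checkpoint behaviour from inside the database rather than from iostat — a minimal sketch against the standard statistics views available on 9.1/9.2:

    -- checkpoints requested early (req) vs. triggered by checkpoint_timeout (timed),
    -- and how many buffers are written by checkpoints, the bgwriter and backends
    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_clean, buffers_backend,
           stats_reset
    FROM pg_stat_bgwriter;

    -- the checkpoint and bgwriter settings currently in effect
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name LIKE 'checkpoint%' OR name LIKE 'bgwriter%';

Together with log_checkpoints = on (already set in the configuration above), this shows whether checkpoints are arriving early and how much work each one does.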
{
"msg_contents": "On 20/02/13 12:24, Josh Berkus wrote:\n>\n> NM, I tested lowering dirty_background_ratio, and it didn't help,\n> because checkpoints are kicking in before pdflush ever gets there.\n>\n> So the issue seems to be that if you have this combination of factors:\n>\n> 1. large RAM\n> 2. many/fast CPUs\n> 3. a database which fits in RAM but is larger than the RAID controller's\n> WB cache\n> 4. pg_xlog on the same volume as pgdata\n>\n> ... then you'll see checkpoint \"stalls\" and spread checkpoint will\n> actually make them worse by making the stalls longer.\n>\n> Moving pg_xlog to a separate partition makes this better. Making\n> bgwriter more aggressive helps a bit more on top of that.\n>\n\nWe have pg_xlog on a pair of PCIe SSD. Also we running the deadline io \nscheduler.\n\nRegards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Feb 2013 12:37:38 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
{
"msg_contents": "On Tue, Feb 19, 2013 at 4:24 PM, Josh Berkus <[email protected]> wrote:\n> ... then you'll see checkpoint \"stalls\" and spread checkpoint will\n> actually make them worse by making the stalls longer.\n\nWait, if they're spread enough then there won't be a checkpoint, so to\nspeak. Are you saying that spreading them out means that they still\nkind of pile up, even with say a completion target of 1.0 etc?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Feb 2013 20:15:23 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu 12.04"
},
{
"msg_contents": "On 02/19/2013 07:15 PM, Scott Marlowe wrote:\n> On Tue, Feb 19, 2013 at 4:24 PM, Josh Berkus <[email protected]> wrote:\n>> ... then you'll see checkpoint \"stalls\" and spread checkpoint will\n>> actually make them worse by making the stalls longer.\n> \n> Wait, if they're spread enough then there won't be a checkpoint, so to\n> speak. Are you saying that spreading them out means that they still\n> kind of pile up, even with say a completion target of 1.0 etc?\n\nI'm saying that spreading them makes things worse, because they get\nintermixed with the fsyncs for the WAL and causes commits to stall. I\ntried setting checkpoint_completion_target = 0.0 and throughput got\nabout 10% better.\n\nI'm beginning to think that checkpoint_completion_target should be 0.0,\nby default.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Feb 2013 14:44:34 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
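A small sketch of how the checkpoint behaviour under discussion can be made visible, assuming PostgreSQL 9.1; checkpoint_completion_target = 0.0 is the value being tested above, not a general recommendation:

    # postgresql.conf
    log_checkpoints = on                  # log start/end and buffer counts for every checkpoint
    checkpoint_completion_target = 0.0    # default is 0.5

    -- cumulative checkpoint vs. backend write activity
    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_clean, buffers_backend
    FROM pg_stat_bgwriter;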
{
"msg_contents": "On Wed, Feb 20, 2013 at 3:44 PM, Josh Berkus <[email protected]> wrote:\n> On 02/19/2013 07:15 PM, Scott Marlowe wrote:\n>> On Tue, Feb 19, 2013 at 4:24 PM, Josh Berkus <[email protected]> wrote:\n>>> ... then you'll see checkpoint \"stalls\" and spread checkpoint will\n>>> actually make them worse by making the stalls longer.\n>>\n>> Wait, if they're spread enough then there won't be a checkpoint, so to\n>> speak. Are you saying that spreading them out means that they still\n>> kind of pile up, even with say a completion target of 1.0 etc?\n>\n> I'm saying that spreading them makes things worse, because they get\n> intermixed with the fsyncs for the WAL and causes commits to stall. I\n> tried setting checkpoint_completion_target = 0.0 and throughput got\n> about 10% better.\n\nSounds to me like your IO system is stalling on fsyncs or something\nlike that. On machines with plenty of IO cranking up completion\ntarget usuall smooths things out. I've got some new big servers\ncoming in at work over the next few months so I'm gonna test and\ncompare Ubuntu 10.04 and 12.04 and see if I can see this behaviour.\nWe have a 12.04 machine in production but honestly it's not working\nvery hard right now. But it's in production so I can't benchmark it\nwithout causing problems.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Feb 2013 18:15:26 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu 12.04"
},
{
"msg_contents": "\n> Sounds to me like your IO system is stalling on fsyncs or something\n> like that. On machines with plenty of IO cranking up completion\n> target usuall smooths things out. \n\nIt certainly seems like it does. However, I can't demonstrate the issue\nusing any simpler tool than pgbench ... even running four test_fsyncs in\nparallel didn't show any issues, nor do standard FS testing tools.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Feb 2013 19:14:10 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
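A sketch of the kind of synthetic fsync test referred to above, assuming the contrib program pg_test_fsync that ships with 9.1 (earlier releases call it test_fsync); the path is illustrative:

    # run the fsync micro-benchmark on the same filesystem as pg_xlog
    pg_test_fsync -f /var/lib/postgresql/9.1/main/fsync_testfile
    # start several copies from separate shells to approximate concurrent commits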
{
"msg_contents": "> From: Josh Berkus <[email protected]>\n>To: Scott Marlowe <[email protected]> \n>Cc: [email protected] \n>Sent: Thursday, 21 February 2013, 3:14\n>Subject: Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu 12.04\n> \n>\n>> Sounds to me like your IO system is stalling on fsyncs or something\n>> like that. On machines with plenty of IO cranking up completion\n>> target usuall smooths things out. \n>\n>It certainly seems like it does. However, I can't demonstrate the issue\n>using any simpler tool than pgbench ... even running four test_fsyncs in\n>parallel didn't show any issues, nor do standard FS testing tools.\n>\n\n\nI've missed a load of this thread and just scanned through what I can see, so apologies if I'm repeating anything.\n\nIf the suspicion is the IO system and you've tuned everything you can think of; is there anything interesting in meminfo/iostat/vmstat before/during the stalls? If so can you cause anything similar via bonnie++ with the \"-b\" option?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Feb 2013 11:24:21 +0000 (GMT)",
"msg_from": "Glyn Astill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu 12.04"
},
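A hedged sketch of the observation commands Glyn suggests, plus bonnie++ in fsync-per-write mode; the path and the postgres user are assumptions:

    # watch device utilisation, run queue and dirty/writeback pages during a stall
    iostat -x 1
    vmstat 1
    watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'

    # bonnie++ -b disables write buffering (fsync after every write), closer to WAL behaviour
    bonnie++ -d /path/on/db/volume -u postgres -b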
{
"msg_contents": "On 02/20/13 19:14, Josh Berkus wrote:\n>> Sounds to me like your IO system is stalling on fsyncs or something\n>> like that. On machines with plenty of IO cranking up completion\n>> target usuall smooths things out. \n> It certainly seems like it does. However, I can't demonstrate the issue\n> using any simpler tool than pgbench ... even running four test_fsyncs in\n> parallel didn't show any issues, nor do standard FS testing tools.\n>\n\nWe were really starting to think that the system had an IO problem that we\ncouldn't tickle with any synthetic tools. Then one of our other customers who\nupgraded to Ubuntu 12.04 LTS and is also experiencing issues came across the\nfollowing LKML thread regarding pdflush on 3.0+ kernels:\n\nhttps://lkml.org/lkml/2012/10/9/210\n\nSo, I went and built a couple custom kernels with this patch removed:\n\nhttps://patchwork.kernel.org/patch/825212/\n\nand the bad behavior stopped. Best performance was with a 3.5 kernel with\nthe patch removed.\n\n\n\n-- \nJeff Frost <[email protected]>\nCTO, PostgreSQL Experts, Inc.\nPhone: 1-888-PG-EXPRT x506\nFAX: 415-762-5122\nhttp://www.pgexperts.com/ \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 25 Feb 2013 14:30:00 -0800",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu\n 12.04"
},
{
"msg_contents": "On Fri, Feb 15, 2013 at 10:52 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Fri, Feb 15, 2013 at 11:26 AM, Josh Berkus <[email protected]> wrote:\n> > On 02/14/2013 08:47 PM, Scott Marlowe wrote:\n> >> If you run your benchmarks for more than a few minutes I highly\n> >> recommend enabling sysstat service data collection, then you can look\n> >> at it after the fact with sar. VERY useful stuff both for\n> >> benchmarking and post mortem on live servers.\n> >\n> > Well, background sar, by default on Linux, only collects every 30min.\n> > For a benchmark run, you want to generate your own sar file, for example:\n>\n> On all my machines (debian and ubuntu) it collects every 5.\n>\n\nAll of mine were 10, but once I figured out to edit /etc/cron.d/sysstat\nthey are now every 1 minute.\n\nsar has some remarkably opaque documentation, but I'm glad I tracked that\ndown.\n\nCheers,\n\nJeff\n\nOn Fri, Feb 15, 2013 at 10:52 AM, Scott Marlowe <[email protected]> wrote:\nOn Fri, Feb 15, 2013 at 11:26 AM, Josh Berkus <[email protected]> wrote:\n> On 02/14/2013 08:47 PM, Scott Marlowe wrote:\n>> If you run your benchmarks for more than a few minutes I highly\n>> recommend enabling sysstat service data collection, then you can look\n>> at it after the fact with sar. VERY useful stuff both for\n>> benchmarking and post mortem on live servers.\n>\n> Well, background sar, by default on Linux, only collects every 30min.\n> For a benchmark run, you want to generate your own sar file, for example:\n\nOn all my machines (debian and ubuntu) it collects every 5.All of mine were 10, but once I figured out to edit /etc/cron.d/sysstat they are now every 1 minute.sar has some remarkably opaque documentation, but I'm glad I tracked that down.\nCheers,Jeff",
"msg_date": "Tue, 26 Feb 2013 13:30:42 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu 12.04"
},
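A sketch of the sysstat change described above, assuming the Debian/Ubuntu packaging (file locations and the collector script differ on other distributions):

    # /etc/cron.d/sysstat normally contains something like:
    #   5-55/10 * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1
    # collect every minute instead:
    * * * * * root command -v debian-sa1 > /dev/null && debian-sa1 1 1

    # read the data back later, e.g. CPU and per-device activity for today
    sar -u -f /var/log/sysstat/sa$(date +%d)
    sar -d -f /var/log/sysstat/sa$(date +%d)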
{
"msg_contents": "On Tue, Feb 26, 2013 at 2:30 PM, Jeff Janes <[email protected]> wrote:\n> On Fri, Feb 15, 2013 at 10:52 AM, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> On Fri, Feb 15, 2013 at 11:26 AM, Josh Berkus <[email protected]> wrote:\n>> > On 02/14/2013 08:47 PM, Scott Marlowe wrote:\n>> >> If you run your benchmarks for more than a few minutes I highly\n>> >> recommend enabling sysstat service data collection, then you can look\n>> >> at it after the fact with sar. VERY useful stuff both for\n>> >> benchmarking and post mortem on live servers.\n>> >\n>> > Well, background sar, by default on Linux, only collects every 30min.\n>> > For a benchmark run, you want to generate your own sar file, for\n>> > example:\n>>\n>> On all my machines (debian and ubuntu) it collects every 5.\n>\n>\n> All of mine were 10, but once I figured out to edit /etc/cron.d/sysstat they\n> are now every 1 minute.\n\noh yeah it's every 10 on the 5s. I too need to go to 1minute intervals.\n\n> sar has some remarkably opaque documentation, but I'm glad I tracked that\n> down.\n\nIt's so incredibly useful. When a machine is acting up often getting\nit back online is more important than fixing it right then, and most\nof the system state stuff is lost on reboot / fix.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 26 Feb 2013 14:46:32 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage / load average after upgrading to Ubuntu 12.04"
}
] |
[
{
"msg_contents": "Hello,\n\ntweaking our connection pooler (*pgbouncer*) and *posgres 8.4 *seemed a bit\nconfusing to us.\n\nWhich key should i set for changing the limit of connection between *pgbouncer\n<->posgres8.4?*\n\ni think configurations such as *max_client_conn, default_pool_size, **\ndefault_pool_size* etc of *pgbouncer* are related to limit of connections\nbetween *application <-> pgbounce*r\nI want to increase the connections of pgbouncer opened on posgres (*\npgbouncer<->posgres*) to eliminate waiting connections in the pool, to\nconsume queries faster.\n\nany ideas about that?\n\nBest regards,\nYetkin Ozturk\n\nHello,tweaking our connection pooler (pgbouncer) and posgres 8.4 seemed a bit confusing to us.Which key should i set for changing the limit of connection between pgbouncer <->posgres8.4?\ni think configurations such as max_client_conn, default_pool_size, default_pool_size etc of pgbouncer are related to limit of connections between application <-> pgbouncer \nI want to increase the connections of pgbouncer opened on posgres (pgbouncer<->posgres) to eliminate waiting connections in the pool, to consume queries faster.\nany ideas about that?\nBest regards,Yetkin Ozturk",
"msg_date": "Wed, 13 Feb 2013 15:47:55 +0200",
"msg_from": "=?ISO-8859-1?Q?Yetkin_=D6zt=FCrk?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "connection poolers' db connections"
},
{
"msg_contents": "Yetkin Öztürk <[email protected]> wrote:\n\n> Which key should i set for changing the limit of connection\n> between pgbouncer <-> posgres8.4?\n\nThe options which end in _pool_size. max_client_conn specifies how\nmany application connections can be made to pgbouncer.\n\nIt's generally best to use pool_mode of transaction if you can.\n\n> I want to increase the connections of pgbouncer opened on posgres\n> (pgbouncer<->posgres) to eliminate waiting connections in the\n> pool, to consume queries faster.\n\nBe sure to read this:\n\nhttp://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n\nBeyond a certain point, starting the query sooner will cause it to\nfinish later. Really.\n\n-- \nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Feb 2013 06:56:54 -0800 (PST)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: connection poolers' db connections"
},
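A minimal pgbouncer.ini sketch of the distinction Kevin describes above (max_client_conn governs application to pgbouncer, the *_pool_size settings govern pgbouncer to postgres); the database name and the numbers are illustrative only:

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    pool_mode = transaction      ; as suggested above, if the application allows it
    max_client_conn = 500        ; application -> pgbouncer connections
    default_pool_size = 20       ; pgbouncer -> postgres connections per database/user pair
    reserve_pool_size = 5        ; extra server connections allowed under burst load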
{
"msg_contents": "Hi Kevin,\n\nthanks for responding. I understand its a world of bottlenecks, different\ncombinations of producer-consumer problems,\n\nBeyond a certain point, starting the query sooner will cause it to\n> finish later. Really.\n\n\nbut I think in our situation our Postgres server can consume more\nconnections, at least i'll give a try.\n\nYetkin.\n\n\n2013/2/13 Kevin Grittner <[email protected]>\n\n> Yetkin Öztürk <[email protected]> wrote:\n>\n> > Which key should i set for changing the limit of connection\n> > between pgbouncer <-> posgres8.4?\n>\n> The options which end in _pool_size. max_client_conn specifies how\n> many application connections can be made to pgbouncer.\n>\n> It's generally best to use pool_mode of transaction if you can.\n>\n> > I want to increase the connections of pgbouncer opened on posgres\n> > (pgbouncer<->posgres) to eliminate waiting connections in the\n> > pool, to consume queries faster.\n>\n> Be sure to read this:\n>\n> http://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n>\n> Beyond a certain point, starting the query sooner will cause it to\n> finish later. Really.\n>\n\n\n\n>\n> --\n> Kevin Grittner\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nHi Kevin,thanks for responding. I understand its a world of bottlenecks, different combinations of producer-consumer problems, \nBeyond a certain point, starting the query sooner will cause it tofinish later. Really.but I think in our situation our Postgres server can consume more connections, at least i'll give a try.\nYetkin.2013/2/13 Kevin Grittner <[email protected]>\nYetkin Öztürk <[email protected]> wrote:\n\n> Which key should i set for changing the limit of connection\n> between pgbouncer <-> posgres8.4?\n\nThe options which end in _pool_size. max_client_conn specifies how\nmany application connections can be made to pgbouncer.\n\nIt's generally best to use pool_mode of transaction if you can.\n\n> I want to increase the connections of pgbouncer opened on posgres\n> (pgbouncer<->posgres) to eliminate waiting connections in the\n> pool, to consume queries faster.\n\nBe sure to read this:\n\nhttp://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n\nBeyond a certain point, starting the query sooner will cause it to\nfinish later. Really. \n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 13 Feb 2013 17:35:40 +0200",
"msg_from": "=?ISO-8859-1?Q?Yetkin_=D6zt=FCrk?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: connection poolers' db connections"
}
] |
[
{
"msg_contents": "Hi everybody!\nI'm new in mailing list, and i have a little question.\n\n\nThe tables are:\npostalcodes (place_id, code), PK(place_id, code) 600K of rws\nplaces (id, name), PK(id), INDEX(name) 3M of rows\n\nI've to insert another 600k of rows into postalcodes table, in a single \ntransaction, omitting duplicates.\n\nThe insert query is a prepared statement like this:\n\nINSERT INTO postalcodes (place_id, code)\nSELECT places.id, :code\nFROM places\nLEFT JOIN postalcodes (postalcodes.place_id = places.id and \npostalcodes.code = :code)\nWHERE places.name = :name AND postalcodes.place_id IS NULL\n\nInserting rows works well (3000 queries per second), but when i reach \n30K of executed statements, the insert rate slows down to 500/1000 \nqueries per second).\n\nDoing a commit every 20K of inserts, the insert rate remain 3000 queries \nper second.\n\nThere is a limit of inserts in a transaction?\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Feb 2013 11:28:37 +0100",
"msg_from": "Asmir Mustafic <[email protected]>",
"msg_from_op": true,
"msg_subject": "700K Inserts in transaction"
},
{
"msg_contents": "Are the duplicates evenly distributed? You might have started on a big chunk of dupes.\n\nI'd go about this by loading my new data in a new table, removing the dupes, then inserting all the new data into the old table. That way you have more granular information about the process. And you can do the initial load with copy if you need it. And you can remove the dupes outside of a transaction. \n\nNik\n\nSent from my iPhone\n\nOn Feb 14, 2013, at 5:28 AM, Asmir Mustafic <[email protected]> wrote:\n\n> Hi everybody!\n> I'm new in mailing list, and i have a little question.\n> \n> \n> The tables are:\n> postalcodes (place_id, code), PK(place_id, code) 600K of rws\n> places (id, name), PK(id), INDEX(name) 3M of rows\n> \n> I've to insert another 600k of rows into postalcodes table, in a single transaction, omitting duplicates.\n> \n> The insert query is a prepared statement like this:\n> \n> INSERT INTO postalcodes (place_id, code)\n> SELECT places.id, :code\n> FROM places\n> LEFT JOIN postalcodes (postalcodes.place_id = places.id and postalcodes.code = :code)\n> WHERE places.name = :name AND postalcodes.place_id IS NULL\n> \n> Inserting rows works well (3000 queries per second), but when i reach 30K of executed statements, the insert rate slows down to 500/1000 queries per second).\n> \n> Doing a commit every 20K of inserts, the insert rate remain 3000 queries per second.\n> \n> There is a limit of inserts in a transaction?\n> \n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Feb 2013 06:51:49 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: 700K Inserts in transaction"
}
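A sketch of the staging-table approach described above, under assumed names (postalcodes_in and /tmp/new_codes.csv are hypothetical); the dedup work then happens in one set-based statement instead of 600K individual inserts:

    -- load the raw data into a staging table first
    CREATE TEMP TABLE postalcodes_in (name text, code text);
    COPY postalcodes_in FROM '/tmp/new_codes.csv' WITH CSV;

    -- insert only rows that do not already exist, deduplicating the input as we go
    INSERT INTO postalcodes (place_id, code)
    SELECT DISTINCT p.id, i.code
    FROM postalcodes_in i
    JOIN places p ON p.name = i.name
    WHERE NOT EXISTS (
        SELECT 1 FROM postalcodes pc
        WHERE pc.place_id = p.id AND pc.code = i.code
    );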
] |
[
{
"msg_contents": "My postgres db ran out of space. I have 27028 files in the pg_xlog\ndirectory. I'm unclear what happened this has been running flawless for\nyears. I do have archiving turned on and run an archive command every 10\nminutes.\n\nI'm not sure how to go about cleaning this up, I got the DB back up, but\nI've only got 6gb free on this drive and it's going to blow up, if I can't\nrelieve some of the stress from this directory over 220gb.\n\nWhat are my options?\n\nThanks\n\nPostgres 9.1.6\nslon 2.1.2\n\nTory\n\nMy postgres db ran out of space. I have 27028 files in the pg_xlog directory. I'm unclear what happened this has been running flawless for years. I do have archiving turned on and run an archive command every 10 minutes. \nI'm not sure how to go about cleaning this up, I got the DB back up, but I've only got 6gb free on this drive and it's going to blow up, if I can't relieve some of the stress from this directory over 220gb.\nWhat are my options?ThanksPostgres 9.1.6slon 2.1.2Tory",
"msg_date": "Thu, 14 Feb 2013 02:49:16 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "PG_XLOG 27028 files running out of space"
},
{
"msg_contents": "2013/2/14 Tory M Blue <[email protected]>\n\n> My postgres db ran out of space. I have 27028 files in the pg_xlog\n> directory. I'm unclear what happened this has been running flawless for\n> years. I do have archiving turned on and run an archive command every 10\n> minutes.\n>\n> I'm not sure how to go about cleaning this up, I got the DB back up, but\n> I've only got 6gb free on this drive and it's going to blow up, if I can't\n> relieve some of the stress from this directory over 220gb.\n>\n> What are my options?\n>\n> Thanks\n>\n> Postgres 9.1.6\n> slon 2.1.2\n\n\nI can't give any advice right now, but I'd suggest posting more details of\nyour\nsetup, including as much of your postgresql.conf file as possible\n (especially\nthe checkpoint_* and archive_* settings) and also the output of\npg_controldata.\n\nIan Barwick\n\n2013/2/14 Tory M Blue <[email protected]>\nMy postgres db ran out of space. I have 27028 files in the pg_xlog directory. I'm unclear what happened this has been running flawless for years. I do have archiving turned on and run an archive command every 10 minutes. \nI'm not sure how to go about cleaning this up, I got the DB back up, but I've only got 6gb free on this drive and it's going to blow up, if I can't relieve some of the stress from this directory over 220gb.\nWhat are my options?ThanksPostgres 9.1.6slon 2.1.2I can't give any advice right now, but I'd suggest posting more details of yoursetup, including as much of your postgresql.conf file as possible (especially\nthe checkpoint_* and archive_* settings) and also the output of pg_controldata.Ian Barwick",
"msg_date": "Thu, 14 Feb 2013 20:01:36 +0900",
"msg_from": "Ian Lawrence Barwick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG_XLOG 27028 files running out of space"
},
{
"msg_contents": "On Thu, Feb 14, 2013 at 3:01 AM, Ian Lawrence Barwick <[email protected]>wrote:\n\n> 2013/2/14 Tory M Blue <[email protected]>\n>\n>> My postgres db ran out of space. I have 27028 files in the pg_xlog\n>> directory. I'm unclear what happened this has been running flawless for\n>> years. I do have archiving turned on and run an archive command every 10\n>> minutes.\n>>\n>> I'm not sure how to go about cleaning this up, I got the DB back up, but\n>> I've only got 6gb free on this drive and it's going to blow up, if I can't\n>> relieve some of the stress from this directory over 220gb.\n>>\n>> What are my options?\n>>\n>> Thanks\n>>\n>> Postgres 9.1.6\n>> slon 2.1.2\n>\n>\n> I can't give any advice right now, but I'd suggest posting more details of\n> your\n> setup, including as much of your postgresql.conf file as possible\n> (especially\n> the checkpoint_* and archive_* settings) and also the output of\n> pg_controldata.\n>\n> Ian Barwick\n>\n\nThanks Ian\n\nI figured it out and figured out a way around it for now.\n\nMy archive destination had it's ownership changed and thus the archive\ncommand could not write to the directory. I didn't catch this until well it\nwas too late. So 225GB, 27000 files later.\n\nI found a few writeups on how to clear this up and use the command true in\nthe archive command to quickly and easily delete a bunch of wal files from\nthe pg_xlog directory in short order. So that worked and now since I know\nwhat the cause was, I should be able to restore my pg_archive PITR configs\nand be good to go.\n\nThis is definitely one of those bullets I would rather not of taken, but\nthe damage appears to be minimal (thank you postgres)\n\nThanks again\nTory\n\nOn Thu, Feb 14, 2013 at 3:01 AM, Ian Lawrence Barwick <[email protected]> wrote:\n2013/2/14 Tory M Blue <[email protected]>\n\nMy postgres db ran out of space. I have 27028 files in the pg_xlog directory. I'm unclear what happened this has been running flawless for years. I do have archiving turned on and run an archive command every 10 minutes. \nI'm not sure how to go about cleaning this up, I got the DB back up, but I've only got 6gb free on this drive and it's going to blow up, if I can't relieve some of the stress from this directory over 220gb.\nWhat are my options?ThanksPostgres 9.1.6slon 2.1.2I can't give any advice right now, but I'd suggest posting more details of yoursetup, including as much of your postgresql.conf file as possible (especially\nthe checkpoint_* and archive_* settings) and also the output of pg_controldata.Ian Barwick\nThanks IanI figured it out and figured out a way around it for now. My archive destination had it's ownership changed and thus the archive command could not write to the directory. I didn't catch this until well it was too late. So 225GB, 27000 files later.\nI found a few writeups on how to clear this up and use the command true in the archive command to quickly and easily delete a bunch of wal files from the pg_xlog directory in short order. So that worked and now since I know what the cause was, I should be able to restore my pg_archive PITR configs and be good to go.\nThis is definitely one of those bullets I would rather not of taken, but the damage appears to be minimal (thank you postgres)Thanks againTory",
"msg_date": "Thu, 14 Feb 2013 03:06:45 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG_XLOG 27028 files running out of space"
},
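A hedged sketch of the two archive_command settings involved in the workaround above; the archive path is illustrative, and note that any WAL recycled while '/bin/true' is in place is lost to PITR, so a fresh base backup is needed before relying on the archive again:

    # postgresql.conf: let the server believe archiving succeeds so it can recycle WAL
    archive_command = '/bin/true'
    # apply with: pg_ctl reload   (or SELECT pg_reload_conf();)

    # once the destination is writable again, restore a real command, e.g. the documented form
    archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'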
{
"msg_contents": "On 14.02.2013 12:49, Tory M Blue wrote:\n> My postgres db ran out of space. I have 27028 files in the pg_xlog\n> directory. I'm unclear what happened this has been running flawless for\n> years. I do have archiving turned on and run an archive command every 10\n> minutes.\n>\n> I'm not sure how to go about cleaning this up, I got the DB back up, but\n> I've only got 6gb free on this drive and it's going to blow up, if I can't\n> relieve some of the stress from this directory over 220gb.\n>\n> What are my options?\n\nYou'll need to delete some of the oldest xlog files to release disk \nspace. But first you need to make sure you don't delete any files that \nare still needed, and what got you into this situation in the first place.\n\nYou say that you \"run an archive command every 10 minutes\". What do you \nmean by that? archive_command specified in postgresql.conf is executed \nautomatically by the system, so you don't need to and should not run \nthat manually. After archive_command has run successfully, and the \nsystem doesn't need the WAL file for recovery anymore (ie. after the \nnext checkpoint), the system will delete the archived file to release \ndisk space. Clearly that hasn't been working in your system for some \nreason. If archive_command doesn't succeed, ie. it returns a non-zero \nreturn code, the system will keep retrying forever until it succeeds, \nwithout deleting the file. Have you checked the logs for any \narchive_command errors?\n\nTo get out of the immediate trouble, run \"pg_controldata\", and make note \nof this line:\n\nLatest checkpoint's REDO WAL file: 000000010000000000000001\n\nAnything older than that file is not needed for recovery. You can delete \nthose, if you have them safely archived.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Feb 2013 13:08:58 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG_XLOG 27028 files running out of space"
},
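One way to script the manual cleanup Heikki describes, assuming the older segments really are safely archived; pg_archivecleanup (contrib, available since 9.0) removes every WAL file that sorts before the name given, so the REDO file reported by pg_controldata is the natural cutoff:

    # find the oldest segment still needed for crash recovery
    pg_controldata $PGDATA | grep "REDO WAL file"

    # remove everything older than that segment from pg_xlog (double-check the archive first!)
    pg_archivecleanup $PGDATA/pg_xlog 000000010000000000000001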
{
"msg_contents": "Tory M Blue wrote:\n> My postgres db ran out of space. I have 27028 files in the pg_xlog directory. I'm unclear what\n> happened this has been running flawless for years. I do have archiving turned on and run an archive\n> command every 10 minutes.\n> \n> I'm not sure how to go about cleaning this up, I got the DB back up, but I've only got 6gb free on\n> this drive and it's going to blow up, if I can't relieve some of the stress from this directory over\n> 220gb.\n\n> Postgres 9.1.6\n> slon 2.1.2\n\nAre there any messages in the log file?\nAre you sure that archiving works, i.e. do WAL files\nshow up in your archive location?\n\nThe most likely explanation for what you observe is that\narchive_command returns a non-zero result (fails).\nThat would lead to a message in the log.\n\nYours,\nLaurenz Albe\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Feb 2013 11:09:25 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG_XLOG 27028 files running out of space"
},
{
"msg_contents": "On Thu, Feb 14, 2013 at 3:08 AM, Heikki Linnakangas <[email protected]\n> wrote:\n\n> On 14.02.2013 12:49, Tory M Blue wrote:\n>\n>> My postgres db ran out of space. I have 27028 files in the pg_xlog\n>> directory. I'm unclear what happened this has been running flawless for\n>> years. I do have archiving turned on and run an archive command every 10\n>> minutes.\n>>\n>> I'm not sure how to go about cleaning this up, I got the DB back up, but\n>> I've only got 6gb free on this drive and it's going to blow up, if I can't\n>> relieve some of the stress from this directory over 220gb.\n>>\n>> What are my options?\n>>\n>\n> You'll need to delete some of the oldest xlog files to release disk space.\n> But first you need to make sure you don't delete any files that are still\n> needed, and what got you into this situation in the first place.\n>\n> You say that you \"run an archive command every 10 minutes\". What do you\n> mean by that? archive_command specified in postgresql.conf is executed\n> automatically by the system, so you don't need to and should not run that\n> manually. After archive_command has run successfully, and the system\n> doesn't need the WAL file for recovery anymore (ie. after the next\n> checkpoint), the system will delete the archived file to release disk\n> space. Clearly that hasn't been working in your system for some reason. If\n> archive_command doesn't succeed, ie. it returns a non-zero return code, the\n> system will keep retrying forever until it succeeds, without deleting the\n> file. Have you checked the logs for any archive_command errors?\n>\n> To get out of the immediate trouble, run \"pg_controldata\", and make note\n> of this line:\n>\n> Latest checkpoint's REDO WAL file: 000000010000000000000001\n>\n> Anything older than that file is not needed for recovery. You can delete\n> those, if you have them safely archived.\n>\n> - Heikki\n>\n\nThanks Heikki,\n\nYes I misspoke with the archive command, sorry, that was a timeout and in\nmy haste/disorientation I misread/spoke. So I'm clear on that.\n\nI'm also over my issue after discovering the problem, but pg_controldata is\nsomething I could of used initially in my panic, so I've added that command\nto my toolbox and appreciate the response!\n\nThanks\nTory\n\nOn Thu, Feb 14, 2013 at 3:08 AM, Heikki Linnakangas <[email protected]> wrote:\nOn 14.02.2013 12:49, Tory M Blue wrote:\n\nMy postgres db ran out of space. I have 27028 files in the pg_xlog\ndirectory. I'm unclear what happened this has been running flawless for\nyears. I do have archiving turned on and run an archive command every 10\nminutes.\n\nI'm not sure how to go about cleaning this up, I got the DB back up, but\nI've only got 6gb free on this drive and it's going to blow up, if I can't\nrelieve some of the stress from this directory over 220gb.\n\nWhat are my options?\n\n\nYou'll need to delete some of the oldest xlog files to release disk space. But first you need to make sure you don't delete any files that are still needed, and what got you into this situation in the first place.\n\nYou say that you \"run an archive command every 10 minutes\". What do you mean by that? archive_command specified in postgresql.conf is executed automatically by the system, so you don't need to and should not run that manually. After archive_command has run successfully, and the system doesn't need the WAL file for recovery anymore (ie. after the next checkpoint), the system will delete the archived file to release disk space. 
Clearly that hasn't been working in your system for some reason. If archive_command doesn't succeed, ie. it returns a non-zero return code, the system will keep retrying forever until it succeeds, without deleting the file. Have you checked the logs for any archive_command errors?\n\nTo get out of the immediate trouble, run \"pg_controldata\", and make note of this line:\n\nLatest checkpoint's REDO WAL file: 000000010000000000000001\n\nAnything older than that file is not needed for recovery. You can delete those, if you have them safely archived.\n\n- HeikkiThanks Heikki,Yes I misspoke with the archive command, sorry, that was a timeout and in my haste/disorientation I misread/spoke. So I'm clear on that.I'm also over my issue after discovering the problem, but pg_controldata is something I could of used initially in my panic, so I've added that command to my toolbox and appreciate the response!\nThanksTory",
"msg_date": "Thu, 14 Feb 2013 03:11:05 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG_XLOG 27028 files running out of space"
}
] |
[
{
"msg_contents": "Hello,\n\nI've been struggling to understand what's happening on my \ndatabases/query for several days, and I'm turning to higher mind for a \nlogical answer.\n\nI'm dealing with a fairly large database, containing logs informations, \nthat I crunch to get data out of it, with several indexes on them that I \nhoped were logical\n\n\\d ruddersysevents\n Table « public.ruddersysevents »\n Colonne | Type | \nModificateurs\n--------------------+--------------------------+--------------------------------------------------\n id | integer | non NULL Par défaut, \nnextval('serial'::regclass)\n executiondate | timestamp with time zone | non NULL\n nodeid | text | non NULL\n directiveid | text | non NULL\n ruleid | text | non NULL\n serial | integer | non NULL\n component | text | non NULL\n keyvalue | text |\n executiontimestamp | timestamp with time zone | non NULL\n eventtype | character varying(64) |\n policy | text |\n msg | text |\nIndex :\n \"ruddersysevents_pkey\" PRIMARY KEY, btree (id)\n \"component_idx\" btree (component)\n \"configurationruleid_idx\" btree (ruleid)\n \"executiontimestamp_idx\" btree (executiontimestamp)\n \"keyvalue_idx\" btree (keyvalue)\n \"nodeid_idx\" btree (nodeid)\nContraintes de vérification :\n \"ruddersysevents_component_check\" CHECK (component <> ''::text)\n \"ruddersysevents_configurationruleid_check\" CHECK (ruleid <> ''::text)\n \"ruddersysevents_nodeid_check\" CHECK (nodeid <> ''::text)\n \"ruddersysevents_policyinstanceid_check\" CHECK (directiveid <> \n''::text)\n\n\nIt contains 11018592 entries, with the followinf patterns :\n108492 distinct executiontimestamp\n14 distinct nodeid\n59 distinct directiveid\n26 distinct ruleid\n35 distinct serial\n\nRelated table/index size are\n relation | size\n----------------------------------------+---------\n public.ruddersysevents | 3190 MB\n public.nodeid_idx | 614 MB\n public.configurationruleid_idx | 592 MB\n public.ruddersysevents_pkey | 236 MB\n public.executiontimestamp_idx | 236 MB\n\n\nI'm crunching the data by looking for each \nnodeid/ruleid/directiveid/serial with an executiontimestamp in an interval:\n\nexplain analyze select executiondate, nodeid, ruleid, directiveid, \nserial, component, keyValue, executionTimeStamp, eventtype, policy, msg \nfrom RudderSysEvents where 1=1 and nodeId = \n'31264061-5ecb-4891-9aa4-83824178f43d' and ruleId = \n'61713ff1-aa6f-4c86-b3cb-7012bee707dd' and serial = 10 and \nexecutiontimestamp between to_timestamp('2012-11-22 16:00:16.005', \n'YYYY-MM-DD HH24:MI:SS.MS') and to_timestamp('2013-01-25 18:53:52.467', \n'YYYY-MM-DD HH24:MI:SS.MS') ORDER BY executionTimeStamp asc;\n Sort (cost=293125.41..293135.03 rows=3848 width=252) (actual \ntime=28628.922..28647.952 rows=62403 loops=1)\n Sort Key: executiontimestamp\n Sort Method: external merge Disk: 17480kB\n -> Bitmap Heap Scan on ruddersysevents (cost=74359.66..292896.27 \nrows=3848 width=252) (actual time=1243.150..28338.927 rows=62403 loops=1)\n Recheck Cond: ((nodeid = \n'31264061-5ecb-4891-9aa4-83824178f43d'::text) AND (ruleid = \n'61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text))\n Filter: ((serial = 10) AND (executiontimestamp >= \nto_timestamp('2012-11-22 16:00:16.005'::text, 'YYYY-MM-DD \nHH24:MI:SS.MS'::text)) AND (executiontimestamp <= \nto_timestamp('2013-01-25 18:53:52.467'::text, 'YYYY-MM-DD \nHH24:MI:SS.MS'::text)))\n -> BitmapAnd (cost=74359.66..74359.66 rows=90079 width=0) \n(actual time=1228.610..1228.610 rows=0 loops=1)\n -> Bitmap Index Scan on nodeid_idx \n(cost=0.00..25795.17 rows=716237 
width=0) (actual time=421.365..421.365 \nrows=690503 loops=1)\n Index Cond: (nodeid = \n'31264061-5ecb-4891-9aa4-83824178f43d'::text)\n -> Bitmap Index Scan on configurationruleid_idx \n(cost=0.00..48562.32 rows=1386538 width=0) (actual time=794.490..794.490 \nrows=1381391 loops=1)\n Index Cond: (ruleid = \n'61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text)\n Total runtime: 28657.352 ms\n\n\n\nI'm surprised that the executiontimestamp index is not used, since it \nseems to be where most of the query time is spent.\n\nFor all my tests, I removed all the incoming logs, so that this table \nhas only selects and no writes\n\nI'm using Postgres 8.4, on a quite smallish VM, with some process \nrunnings, with the following non default configuration\nshared_buffers = 112MB\nwork_mem = 8MB\nmaintenance_work_mem = 48MB\nmax_stack_depth = 3MB\nwal_buffers = 1MB\neffective_cache_size = 128MB\ncheckpoint_segments = 6\n\nIncreasing the shared_buffers to 384, 1GB or 1500MB didn't improve the \nperformances (less than 10%). I would have expected it to improve, since \nthe indexes would all fit in RAM\nEvery explain analyze made after modification of configuration and \nindexes where done after a complete batch of crunch of logs by our apps, \nto be sure the stats were corrects\n\nI tested further with the indexes, and reached these results :\n1/ dropping the unique index \"configurationruleid_idx\" btree (ruleid), \n\"executiontimestamp_idx\" btree (executiontimestamp), \"keyvalue_idx\" \nbtree (keyvalue), \"nodeid_idx\" btree (nodeid) to replace them by a \nunique index did lower the perfs :\n\ncreate index composite_idx on ruddersysevents (executiontimestamp, \nruleid, serial, nodeid);\n\nIndex Scan using composite_idx on ruddersysevents (cost=0.01..497350.22 \nrows=3729 width=252) (actual time=7.989..83717.901 rows=62403 loops=1)\n Index Cond: ((executiontimestamp >= to_timestamp('2012-11-22 \n16:00:16.005'::text, 'YYYY-MM-DD HH24:MI:SS.MS'::text)) AND \n(executiontimestamp <= to_timestamp('2013-01-25 18:53:52.467'::text, \n'YYYY-MM-DD HH24:MI:SS.MS'::text)) AND (ruleid = \n'61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text) AND (serial = 10) AND \n(nodeid = '31264061-5ecb-4891-9aa4-83824178f43d'::text))\n\n\n2/ Removing nodeid from the index did lower again the perf\n\ncreate index composite2_idx on ruddersysevents (executiontimestamp, \nruleid, serial);\n\n Index Scan using composite2_idx on ruddersysevents \n(cost=0.01..449948.98 rows=3729 width=252) (actual \ntime=23.507..84888.349 rows=62403 loops=1)\n Index Cond: ((executiontimestamp >= to_timestamp('2012-11-22 \n16:00:16.005'::text, 'YYYY-MM-DD HH24:MI:SS.MS'::text)) AND \n(executiontimestamp <= to_timestamp('2013-01-25 18:53:52.467'::text, \n'YYYY-MM-DD HH24:MI:SS.MS'::text)) AND (ruleid = \n'61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text) AND (serial = 10))\n Filter: (nodeid = '31264061-5ecb-4891-9aa4-83824178f43d'::text)\n\n3/ Removing executiontimestamp from the composite index makes the query \nperforms better at the begining of its uses (around 17 secondes), but \nover time it degrades (I'm logging query longer than 20 secondes, and \nthere are very rare in the first half of the batch, and getting more and \nmore common at the end) to what is below\n\ncreate index composite3_idx on ruddersysevents (ruleid, serial, nodeid);\n Sort (cost=24683.44..24693.07 rows=3849 width=252) (actual \ntime=60454.558..60474.013 rows=62403 loops=1)\n Sort Key: executiontimestamp\n Sort Method: external merge Disk: 17480kB\n -> Bitmap Heap Scan on ruddersysevents 
(cost=450.12..24454.23 \nrows=3849 width=252) (actual time=146.065..60249.143 rows=62403 loops=1)\n Recheck Cond: ((ruleid = \n'61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text) AND (serial = 10) AND \n(nodeid = '31264061-5ecb-4891-9aa4-83824178f43d'::text))\n Filter: ((executiontimestamp >= to_timestamp('2012-11-22 \n16:00:16.005'::text, 'YYYY-MM-DD HH24:MI:SS.MS'::text)) AND \n(executiontimestamp <= to_timestamp('2013-01-25 18:53:52.467'::text, \n'YYYY-MM-DD HH24:MI:SS.MS'::text)))\n -> Bitmap Index Scan on composite3_idx (cost=0.00..449.15 \nrows=6635 width=0) (actual time=129.102..129.102 rows=62403 loops=1)\n Index Cond: ((ruleid = \n'61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text) AND (serial = 10) AND \n(nodeid = '31264061-5ecb-4891-9aa4-83824178f43d'::text))\n Total runtime: 60484.022 ms\n\n\nAnd with all these tests, the difference of the whole database \nprocessing with each index time is within 10% error margin (2h10 to \n2h25), even if most of the time is spent on the SQL query.\n\nSo my question is :\n\"Why *not* indexing the column which is not used makes the query slower \nover time, while not slowing the application?\"\n\nIf you have some clues, I'm really interested.\n\nBest regards\nNicolas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Feb 2013 16:35:12 +0100",
"msg_from": "Nicolas Charles <[email protected]>",
"msg_from_op": true,
"msg_subject": "Surprising no use of indexes - low performance"
},
{
"msg_contents": "\nW dniu 2013-02-14 16:35, Nicolas Charles pisze:\n> I'm crunching the data by looking for each nodeid/ruleid/directiveid/serial with an \n> executiontimestamp in an interval:\n>\n> explain analyze select executiondate, nodeid, ruleid, directiveid, serial, component, keyValue, \n> executionTimeStamp, eventtype, policy, msg from RudderSysEvents where 1=1 and nodeId = \n> '31264061-5ecb-4891-9aa4-83824178f43d' and ruleId = '61713ff1-aa6f-4c86-b3cb-7012bee707dd' and \n> serial = 10 and executiontimestamp between to_timestamp('2012-11-22 16:00:16.005', 'YYYY-MM-DD \n> HH24:MI:SS.MS') and to_timestamp('2013-01-25 18:53:52.467', 'YYYY-MM-DD HH24:MI:SS.MS') ORDER BY \n> executionTimeStamp asc;\n> Sort (cost=293125.41..293135.03 rows=3848 width=252) (actual time=28628.922..28647.952 \n> rows=62403 loops=1)\n> Sort Key: executiontimestamp\n> Sort Method: external merge Disk: 17480kB\n> -> Bitmap Heap Scan on ruddersysevents (cost=74359.66..292896.27 rows=3848 width=252) (actual \n> time=1243.150..28338.927 rows=62403 loops=1)\n> Recheck Cond: ((nodeid = '31264061-5ecb-4891-9aa4-83824178f43d'::text) AND (ruleid = \n> '61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text))\n> Filter: ((serial = 10) AND (executiontimestamp >= to_timestamp('2012-11-22 \n> 16:00:16.005'::text, 'YYYY-MM-DD HH24:MI:SS.MS'::text)) AND (executiontimestamp <= \n> to_timestamp('2013-01-25 18:53:52.467'::text, 'YYYY-MM-DD HH24:MI:SS.MS'::text)))\n> -> BitmapAnd (cost=74359.66..74359.66 rows=90079 width=0) (actual \n> time=1228.610..1228.610 rows=0 loops=1)\n> -> Bitmap Index Scan on nodeid_idx (cost=0.00..25795.17 rows=716237 width=0) \n> (actual time=421.365..421.365 rows=690503 loops=1)\n> Index Cond: (nodeid = '31264061-5ecb-4891-9aa4-83824178f43d'::text)\n> -> Bitmap Index Scan on configurationruleid_idx (cost=0.00..48562.32 rows=1386538 \n> width=0) (actual time=794.490..794.490 rows=1381391 loops=1)\n> Index Cond: (ruleid = '61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text)\n> Total runtime: 28657.352 ms\n>\n>\n>\n> I'm surprised that the executiontimestamp index is not used, since it seems to be where most of \n> the query time is spent.\n\nthis use pattern is quite similar to the one I used to have problem with. The key problem here is \nthat planner wants to bitmapand on indexes that are spread on all the table, on all timestamp \nvalues, regardless you are interested in only a narrow timestamp window, and is quite aggressive on \nusing bitmapscan feature. 
So the planner needs to be directed more precisely.\n\nYou could try the above again with:\n\nSET enable_bitmapscan TO off ?\n\nIt helped in my case.\n\nYou may also try enclosing the timestamp condition in a \"preselecting\" CTE, and doing the rest of the finer \nfiltering outside of it, like:\n\nwith\np as (select * from RudderSysEvents where executiontimestamp between '2012-11-22 16:00:16.005' and \n'2013-01-25 18:53:52.467')\nselect executiondate, nodeid, ruleid, directiveid, serial, component, keyValue, executionTimeStamp, \neventtype, policy, msg\nfrom p\nwhere nodeId = '31264061-5ecb-4891-9aa4-83824178f43d' and ruleId = \n'61713ff1-aa6f-4c86-b3cb-7012bee707dd' and serial = 10\n\nAs a side note, I think that all your indexes, except the timestamp one, are unnecessary, because of \nthe low distribution of their values, and, as you see, the confusion they cause the planner.\n\nEventually, you may add the column which will always be used for filtering as the second column of a \ntwo-column index together with the timestamp, and move its filtering inside the CTE part.\n\nHTH,\nIrek.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Feb 2013 19:25:54 +0100",
"msg_from": "Ireneusz Pluta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Surprising no use of indexes - low performance"
},
{
"msg_contents": "On Thu, Feb 14, 2013 at 7:35 AM, Nicolas Charles\n<[email protected]> wrote:\n>\n> It contains 11018592 entries, with the followinf patterns :\n> 108492 distinct executiontimestamp\n> 14 distinct nodeid\n> 59 distinct directiveid\n> 26 distinct ruleid\n> 35 distinct serial\n\nHow many entries fall within a typical query interval of executiontimestamp?\n\n...\n>\n> I'm surprised that the executiontimestamp index is not used, since it seems\n> to be where most of the query time is spent.\n\nI do not draw that conclusion from your posted information. Can you\nhighlight the parts of it that lead you to this conclusion?\n\n> For all my tests, I removed all the incoming logs, so that this table has\n> only selects and no writes\n>\n> I'm using Postgres 8.4, on a quite smallish VM, with some process runnings,\n\nA lot of improvements have been made since 8.4 which would make this\nkind of thing easier to figure out. What is smallish?\n\n> with the following non default configuration\n> shared_buffers = 112MB\n> work_mem = 8MB\n> maintenance_work_mem = 48MB\n> max_stack_depth = 3MB\n> wal_buffers = 1MB\n> effective_cache_size = 128MB\n\neffective_cache_size seems small unless you expect to have a lot of\nthis type of query running simultaneously, assuming you have at least\n4GB of RAM, which I'm guessing you do based on your next comments.\n\n> checkpoint_segments = 6\n>\n> Increasing the shared_buffers to 384, 1GB or 1500MB didn't improve the\n> performances (less than 10%). I would have expected it to improve, since the\n> indexes would all fit in RAM\n\nIf the indexes fit in RAM, they fit in RAM. If anything, increasing\nshared_buffers could make it harder to fit them entirely in RAM. If\nyour shared buffers undergo a lot of churn, then the OS cache and the\nshared buffers tend to uselessly mirror each other, meaning there is\nless space for non-redundant pages.\n\n>\n> create index composite_idx on ruddersysevents (executiontimestamp, ruleid,\n> serial, nodeid);\n\nI wouldn't expect this to work well for this particular query. Since\nthe leading column is used in a range test, the following columns\ncannot be used efficiently in the index structure. You should put the\nequality-tested columns at the front of the index and the range-tested\none at the end of it.\n\n\n>\n> 2/ Removing nodeid from the index did lower again the perf\n> create index composite2_idx on ruddersysevents (executiontimestamp, ruleid,\n> serial);\n\n\nI doubt that 84888.349 vs 83717.901 is really a meaningful difference.\n\n> 3/ Removing executiontimestamp from the composite index makes the query\n> performs better at the begining of its uses (around 17 secondes), but over\n> time it degrades (I'm logging query longer than 20 secondes, and there are\n> very rare in the first half of the batch, and getting more and more common\n> at the end) to what is below\n\nIf the batch processing adds data, it is not surprising the query\nslows down. It looks like it is still faster at the end then the\nprevious two cases, right?\n\n\n> So my question is :\n> \"Why *not* indexing the column which is not used makes the query slower over\n> time, while not slowing the application?\"\n\nI don't know what column you are referring to here. But it sounds\nlike you think that dropping the leading column from an index is a\nminor change. It is not. 
It makes a fundamentally different index.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Feb 2013 11:27:33 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Surprising no use of indexes - low performance"
},
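A sketch of the index shape Jeff describes above (equality-tested columns first, the range-tested timestamp last); the index name is made up, and the literals are taken from the query earlier in the thread:

    CREATE INDEX composite4_idx
        ON ruddersysevents (nodeid, ruleid, serial, executiontimestamp);

    EXPLAIN ANALYZE
    SELECT executiondate, nodeid, ruleid, directiveid, serial, component,
           keyvalue, executiontimestamp, eventtype, policy, msg
    FROM ruddersysevents
    WHERE nodeid = '31264061-5ecb-4891-9aa4-83824178f43d'
      AND ruleid = '61713ff1-aa6f-4c86-b3cb-7012bee707dd'
      AND serial = 10
      AND executiontimestamp BETWEEN to_timestamp('2012-11-22 16:00:16.005', 'YYYY-MM-DD HH24:MI:SS.MS')
                                 AND to_timestamp('2013-01-25 18:53:52.467', 'YYYY-MM-DD HH24:MI:SS.MS')
    ORDER BY executiontimestamp;

    -- with all four WHERE columns covered by one index, a single index range scan can both
    -- filter and return rows already ordered by executiontimestamp, avoiding the external sort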
{
"msg_contents": "On 14/02/2013 20:27, Jeff Janes wrote:\n> On Thu, Feb 14, 2013 at 7:35 AM, Nicolas Charles\n> <[email protected]> wrote:\n>> It contains 11018592 entries, with the followinf patterns :\n>> 108492 distinct executiontimestamp\n>> 14 distinct nodeid\n>> 59 distinct directiveid\n>> 26 distinct ruleid\n>> 35 distinct serial\n> How many entries fall within a typical query interval of executiontimestamp?\n\nAround 65 000 entries\n.\n>> I'm surprised that the executiontimestamp index is not used, since it seems\n>> to be where most of the query time is spent.\n> I do not draw that conclusion from your posted information. Can you\n> highlight the parts of it that lead you to this conclusion?\nThe index scan are on nodeid_idx and configurationruleid_idx, not on \nexecutiontimestamp\nOr am I misreading the output ?\n\n>> For all my tests, I removed all the incoming logs, so that this table has\n>> only selects and no writes\n>>\n>> I'm using Postgres 8.4, on a quite smallish VM, with some process runnings,\n> A lot of improvements have been made since 8.4 which would make this\n> kind of thing easier to figure out. What is smallish?\n\nA VM with 1 core, 2 GB RAM, a single hard drive\n\n>> with the following non default configuration\n>> shared_buffers = 112MB\n>> work_mem = 8MB\n>> maintenance_work_mem = 48MB\n>> max_stack_depth = 3MB\n>> wal_buffers = 1MB\n>> effective_cache_size = 128MB\n> effective_cache_size seems small unless you expect to have a lot of\n> this type of query running simultaneously, assuming you have at least\n> 4GB of RAM, which I'm guessing you do based on your next comments.\n\nFor the sake of the test, the VM got its memory increased, with no \nsignificant changes\n\n>> checkpoint_segments = 6\n>>\n>> Increasing the shared_buffers to 384, 1GB or 1500MB didn't improve the\n>> performances (less than 10%). I would have expected it to improve, since the\n>> indexes would all fit in RAM\n> If the indexes fit in RAM, they fit in RAM. If anything, increasing\n> shared_buffers could make it harder to fit them entirely in RAM. If\n> your shared buffers undergo a lot of churn, then the OS cache and the\n> shared buffers tend to uselessly mirror each other, meaning there is\n> less space for non-redundant pages.\nOh !\nSo I completely misunderstood the meaning of shared_buffer; I figured \nthat it was somehow the place where the data would be stored by postgres \n(like indexes)\n\n\n>\n>> create index composite_idx on ruddersysevents (executiontimestamp, ruleid,\n>> serial, nodeid);\n> I wouldn't expect this to work well for this particular query. Since\n> the leading column is used in a range test, the following columns\n> cannot be used efficiently in the index structure. You should put the\n> equality-tested columns at the front of the index and the range-tested\n> one at the end of it.\n>\n>\n>> 2/ Removing nodeid from the index did lower again the perf\n>> create index composite2_idx on ruddersysevents (executiontimestamp, ruleid,\n>> serial);\n>\n> I doubt that 84888.349 vs 83717.901 is really a meaningful difference.\n>\n>> 3/ Removing executiontimestamp from the composite index makes the query\n>> performs better at the begining of its uses (around 17 secondes), but over\n>> time it degrades (I'm logging query longer than 20 secondes, and there are\n>> very rare in the first half of the batch, and getting more and more common\n>> at the end) to what is below\n> If the batch processing adds data, it is not surprising the query\n> slows down. 
It looks like it is still faster at the end then the\n> previous two cases, right?\n\nActually, the batch reads data from this table, and writes into another. \nSo the content of the table doesn't change at all.\nAnd yes, it is faster than the two previous cases.\n\n>> So my question is :\n>> \"Why *not* indexing the column which is not used makes the query slower over\n>> time, while not slowing the application?\"\n> I don't know what column you are referring to here. But it sounds\n> like you think that dropping the leading column from an index is a\n> minor change. It is not. It makes a fundamentally different index.\n\nI was referring to the executionTimeStamp column. I know it is a HUGE \nchange, but it's clearly not behaving the way I thought.\nWith your remark I understand a little better what is going on, and I \ncan better test what I'm doing.\n\nThank you!\nNicolas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Feb 2013 10:00:24 +0100",
"msg_from": "Nicolas Charles <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Surprising no use of indexes - low performance"
},
{
"msg_contents": "On Fri, Feb 15, 2013 at 1:00 AM, Nicolas Charles\n<[email protected]> wrote:\n> On 14/02/2013 20:27, Jeff Janes wrote:\n>>\n>> On Thu, Feb 14, 2013 at 7:35 AM, Nicolas Charles\n>> <[email protected]> wrote:\n>>>\n>>> It contains 11018592 entries, with the followinf patterns :\n>>> 108492 distinct executiontimestamp\n>>> 14 distinct nodeid\n>>> 59 distinct directiveid\n>>> 26 distinct ruleid\n>>> 35 distinct serial\n>>\n>> How many entries fall within a typical query interval of\n>> executiontimestamp?\n>\n>\n> Around 65 000 entries\n>\n> .\n>>>\n>>> I'm surprised that the executiontimestamp index is not used, since it\n>>> seems\n>>> to be where most of the query time is spent.\n>>\n>> I do not draw that conclusion from your posted information. Can you\n>> highlight the parts of it that lead you to this conclusion?\n>\n> The index scan are on nodeid_idx and configurationruleid_idx, not on\n> executiontimestamp\n> Or am I misreading the output ?\n\nYou are correct about the use of nodeid_idx and\nconfigurationruleid_idx. But the part about most of the time going\ninto filtering out rows that fail to match executiontimestamp is the\npart that I don't think is supported by the explained plan. The\ninformation given by the explain plan is not sufficient to decide that\none way or the other. Some more recent improvements in \"explain\n(analyze, buffers)\" would make it easier (especially with\ntrack_io_timing on) but it would still be hard to draw a definitive\nconclusion. The most definitive thing would be to do the experiment\nof adding executiontimestamp as a *trailing* column to the end of one\nof the existing indexes and see what happens.\n\n\n>> If the indexes fit in RAM, they fit in RAM. If anything, increasing\n>> shared_buffers could make it harder to fit them entirely in RAM. If\n>> your shared buffers undergo a lot of churn, then the OS cache and the\n>> shared buffers tend to uselessly mirror each other, meaning there is\n>> less space for non-redundant pages.\n>\n> Oh !\n> So I completely misunderstood the meaning of shared_buffer; I figured that\n> it was somehow the place where the data would be stored by postgres (like\n> indexes)\n\nThat is correct, it is the space used by postgres to cache data. But,\nthe rest of RAM (beyond shared_buffers) will also be used to cache\ndata; but by the OS rather than by postgres. On a dedicated server,\nthe OS will generally choose to (or at least attempt to) use this\nspace for the benefit of postgres anyway. If shared_buffers > RAM/2,\nit won't be very successful at this, but it will still try. The\nkernel and postgres do not have intimate knowledge of each other, so\nit is hard to arrange that all pages show up in just one place or the\nother but not both.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Feb 2013 10:04:07 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Surprising no use of indexes - low performance"
},
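For reference, a sketch of the newer instrumentation Jeff mentions; EXPLAIN (ANALYZE, BUFFERS) exists from 9.0 and track_io_timing from 9.2 (superuser to enable), so this is not available on the 8.4 server in question:

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*)
    FROM ruddersysevents
    WHERE nodeid = '31264061-5ecb-4891-9aa4-83824178f43d'
      AND ruleid = '61713ff1-aa6f-4c86-b3cb-7012bee707dd'
      AND serial = 10;
    -- per-node shared hit/read counts and I/O times show where the ~28 seconds actually go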
{
"msg_contents": "On Thu, Feb 14, 2013 at 9:35 AM, Nicolas Charles\n<[email protected]> wrote:\n> Hello,\n>\n> I've been struggling to understand what's happening on my databases/query\n> for several days, and I'm turning to higher mind for a logical answer.\n>\n> I'm dealing with a fairly large database, containing logs informations, that\n> I crunch to get data out of it, with several indexes on them that I hoped\n> were logical\n>\n> \\d ruddersysevents\n> Table « public.ruddersysevents »\n> Colonne | Type |\n> Modificateurs\n> --------------------+--------------------------+--------------------------------------------------\n> id | integer | non NULL Par défaut,\n> nextval('serial'::regclass)\n> executiondate | timestamp with time zone | non NULL\n> nodeid | text | non NULL\n> directiveid | text | non NULL\n> ruleid | text | non NULL\n> serial | integer | non NULL\n> component | text | non NULL\n> keyvalue | text |\n> executiontimestamp | timestamp with time zone | non NULL\n> eventtype | character varying(64) |\n> policy | text |\n> msg | text |\n> Index :\n> \"ruddersysevents_pkey\" PRIMARY KEY, btree (id)\n> \"component_idx\" btree (component)\n> \"configurationruleid_idx\" btree (ruleid)\n> \"executiontimestamp_idx\" btree (executiontimestamp)\n> \"keyvalue_idx\" btree (keyvalue)\n> \"nodeid_idx\" btree (nodeid)\n> Contraintes de vérification :\n> \"ruddersysevents_component_check\" CHECK (component <> ''::text)\n> \"ruddersysevents_configurationruleid_check\" CHECK (ruleid <> ''::text)\n> \"ruddersysevents_nodeid_check\" CHECK (nodeid <> ''::text)\n> \"ruddersysevents_policyinstanceid_check\" CHECK (directiveid <> ''::text)\n>\n>\n> It contains 11018592 entries, with the followinf patterns :\n> 108492 distinct executiontimestamp\n> 14 distinct nodeid\n> 59 distinct directiveid\n> 26 distinct ruleid\n> 35 distinct serial\n>\n> Related table/index size are\n> relation | size\n> ----------------------------------------+---------\n> public.ruddersysevents | 3190 MB\n> public.nodeid_idx | 614 MB\n> public.configurationruleid_idx | 592 MB\n> public.ruddersysevents_pkey | 236 MB\n> public.executiontimestamp_idx | 236 MB\n>\n>\n> I'm crunching the data by looking for each nodeid/ruleid/directiveid/serial\n> with an executiontimestamp in an interval:\n>\n> explain analyze select executiondate, nodeid, ruleid, directiveid, serial,\n> component, keyValue, executionTimeStamp, eventtype, policy, msg from\n> RudderSysEvents where 1=1 and nodeId =\n> '31264061-5ecb-4891-9aa4-83824178f43d' and ruleId =\n> '61713ff1-aa6f-4c86-b3cb-7012bee707dd' and serial = 10 and\n> executiontimestamp between to_timestamp('2012-11-22 16:00:16.005',\n> 'YYYY-MM-DD HH24:MI:SS.MS') and to_timestamp('2013-01-25 18:53:52.467',\n> 'YYYY-MM-DD HH24:MI:SS.MS') ORDER BY executionTimeStamp asc;\n> Sort (cost=293125.41..293135.03 rows=3848 width=252) (actual\n> time=28628.922..28647.952 rows=62403 loops=1)\n> Sort Key: executiontimestamp\n> Sort Method: external merge Disk: 17480kB\n> -> Bitmap Heap Scan on ruddersysevents (cost=74359.66..292896.27\n> rows=3848 width=252) (actual time=1243.150..28338.927 rows=62403 loops=1)\n> Recheck Cond: ((nodeid =\n> '31264061-5ecb-4891-9aa4-83824178f43d'::text) AND (ruleid =\n> '61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text))\n> Filter: ((serial = 10) AND (executiontimestamp >=\n> to_timestamp('2012-11-22 16:00:16.005'::text, 'YYYY-MM-DD\n> HH24:MI:SS.MS'::text)) AND (executiontimestamp <= to_timestamp('2013-01-25\n> 18:53:52.467'::text, 'YYYY-MM-DD HH24:MI:SS.MS'::text)))\n> -> BitmapAnd 
(cost=74359.66..74359.66 rows=90079 width=0) (actual\n> time=1228.610..1228.610 rows=0 loops=1)\n> -> Bitmap Index Scan on nodeid_idx (cost=0.00..25795.17\n> rows=716237 width=0) (actual time=421.365..421.365 rows=690503 loops=1)\n> Index Cond: (nodeid =\n> '31264061-5ecb-4891-9aa4-83824178f43d'::text)\n> -> Bitmap Index Scan on configurationruleid_idx\n> (cost=0.00..48562.32 rows=1386538 width=0) (actual time=794.490..794.490\n> rows=1381391 loops=1)\n> Index Cond: (ruleid =\n> '61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text)\n> Total runtime: 28657.352 ms\n>\n>\n>\n> I'm surprised that the executiontimestamp index is not used, since it seems\n> to be where most of the query time is spent.\n>\n> For all my tests, I removed all the incoming logs, so that this table has\n> only selects and no writes\n>\n> I'm using Postgres 8.4, on a quite smallish VM, with some process runnings,\n> with the following non default configuration\n> shared_buffers = 112MB\n> work_mem = 8MB\n> maintenance_work_mem = 48MB\n> max_stack_depth = 3MB\n> wal_buffers = 1MB\n> effective_cache_size = 128MB\n> checkpoint_segments = 6\n>\n> Increasing the shared_buffers to 384, 1GB or 1500MB didn't improve the\n> performances (less than 10%). I would have expected it to improve, since the\n> indexes would all fit in RAM\n> Every explain analyze made after modification of configuration and indexes\n> where done after a complete batch of crunch of logs by our apps, to be sure\n> the stats were corrects\n>\n> I tested further with the indexes, and reached these results :\n> 1/ dropping the unique index \"configurationruleid_idx\" btree (ruleid),\n> \"executiontimestamp_idx\" btree (executiontimestamp), \"keyvalue_idx\" btree\n> (keyvalue), \"nodeid_idx\" btree (nodeid) to replace them by a unique index\n> did lower the perfs :\n>\n> create index composite_idx on ruddersysevents (executiontimestamp, ruleid,\n> serial, nodeid);\n>\n> Index Scan using composite_idx on ruddersysevents (cost=0.01..497350.22\n> rows=3729 width=252) (actual time=7.989..83717.901 rows=62403 loops=1)\n> Index Cond: ((executiontimestamp >= to_timestamp('2012-11-22\n> 16:00:16.005'::text, 'YYYY-MM-DD HH24:MI:SS.MS'::text)) AND\n> (executiontimestamp <= to_timestamp('2013-01-25 18:53:52.467'::text,\n> 'YYYY-MM-DD HH24:MI:SS.MS'::text)) AND (ruleid =\n> '61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text) AND (serial = 10) AND (nodeid\n> = '31264061-5ecb-4891-9aa4-83824178f43d'::text))\n>\n>\n> 2/ Removing nodeid from the index did lower again the perf\n>\n> create index composite2_idx on ruddersysevents (executiontimestamp, ruleid,\n> serial);\n>\n> Index Scan using composite2_idx on ruddersysevents (cost=0.01..449948.98\n> rows=3729 width=252) (actual time=23.507..84888.349 rows=62403 loops=1)\n> Index Cond: ((executiontimestamp >= to_timestamp('2012-11-22\n> 16:00:16.005'::text, 'YYYY-MM-DD HH24:MI:SS.MS'::text)) AND\n> (executiontimestamp <= to_timestamp('2013-01-25 18:53:52.467'::text,\n> 'YYYY-MM-DD HH24:MI:SS.MS'::text)) AND (ruleid =\n> '61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text) AND (serial = 10))\n> Filter: (nodeid = '31264061-5ecb-4891-9aa4-83824178f43d'::text)\n>\n> 3/ Removing executiontimestamp from the composite index makes the query\n> performs better at the begining of its uses (around 17 secondes), but over\n> time it degrades (I'm logging query longer than 20 secondes, and there are\n> very rare in the first half of the batch, and getting more and more common\n> at the end) to what is below\n>\n> create index composite3_idx on ruddersysevents 
(ruleid, serial, nodeid);\n> Sort (cost=24683.44..24693.07 rows=3849 width=252) (actual\n> time=60454.558..60474.013 rows=62403 loops=1)\n> Sort Key: executiontimestamp\n> Sort Method: external merge Disk: 17480kB\n> -> Bitmap Heap Scan on ruddersysevents (cost=450.12..24454.23 rows=3849\n> width=252) (actual time=146.065..60249.143 rows=62403 loops=1)\n> Recheck Cond: ((ruleid =\n> '61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text) AND (serial = 10) AND (nodeid\n> = '31264061-5ecb-4891-9aa4-83824178f43d'::text))\n> Filter: ((executiontimestamp >= to_timestamp('2012-11-22\n> 16:00:16.005'::text, 'YYYY-MM-DD HH24:MI:SS.MS'::text)) AND\n> (executiontimestamp <= to_timestamp('2013-01-25 18:53:52.467'::text,\n> 'YYYY-MM-DD HH24:MI:SS.MS'::text)))\n> -> Bitmap Index Scan on composite3_idx (cost=0.00..449.15\n> rows=6635 width=0) (actual time=129.102..129.102 rows=62403 loops=1)\n> Index Cond: ((ruleid =\n> '61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text) AND (serial = 10) AND (nodeid\n> = '31264061-5ecb-4891-9aa4-83824178f43d'::text))\n> Total runtime: 60484.022 ms\n>\n>\n> And with all these tests, the difference of the whole database processing\n> with each index time is within 10% error margin (2h10 to 2h25), even if most\n> of the time is spent on the SQL query.\n>\n> So my question is :\n> \"Why *not* indexing the column which is not used makes the query slower over\n> time, while not slowing the application?\"\n>\n> If you have some clues, I'm really interested.\n\nThis query can be optimized to zero (assuming data in cache and not\ntoo many dead rows).\n\ncreate one index on nodeId, ruleId, serial, executiontimestamp columns and do:\n\nSELECT\n executiondate, nodeid, ruleid, directiveid, serial, component,\nkeyValue, executionTimeStamp, eventtype, policy, msg\nFROM RudderSysEvents\n WHERE (nodeId, ruleId, serial, executiontimestamp) >= (\n '31264061-5ecb-4891-9aa4-83824178f43d',\n '61713ff1-aa6f-4c86-b3cb-7012bee707dd',\n 10, to_timestamp('2012-11-22 16:00:16.005', 'YYYY-MM-DD HH24:MI:SS.MS'))\n AND (nodeId, ruleId, serial, executiontimestamp) <= (\n '31264061-5ecb-4891-9aa4-83824178f43d',\n '61713ff1-aa6f-4c86-b3cb-7012bee707dd',\n 10, to_timestamp('2013-01-25 18:53:52.467', 'YYYY-MM-DD HH24:MI:SS.MS'))\nORDER BY nodeId, ruleId, serial, executiontimestamp\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Feb 2013 15:27:37 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Surprising no use of indexes - low performance"
},
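A minimal sketch of the single-index, row-constructor approach Merlin describes above, restated with the table and literal values quoted earlier in the thread. The index name is made up; the point is that once all four columns are in one btree, the two row-wise comparisons bound a single index range scan that already returns rows in the requested order, so no separate sort step should be needed.

-- one covering btree in the comparison order (name is arbitrary)
CREATE INDEX ruddersysevents_node_rule_serial_ts_idx
    ON ruddersysevents (nodeid, ruleid, serial, executiontimestamp);

SELECT executiondate, nodeid, ruleid, directiveid, serial, component,
       keyvalue, executiontimestamp, eventtype, policy, msg
FROM ruddersysevents
WHERE (nodeid, ruleid, serial, executiontimestamp) >=
      ('31264061-5ecb-4891-9aa4-83824178f43d',
       '61713ff1-aa6f-4c86-b3cb-7012bee707dd',
       10, to_timestamp('2012-11-22 16:00:16.005', 'YYYY-MM-DD HH24:MI:SS.MS'))
  AND (nodeid, ruleid, serial, executiontimestamp) <=
      ('31264061-5ecb-4891-9aa4-83824178f43d',
       '61713ff1-aa6f-4c86-b3cb-7012bee707dd',
       10, to_timestamp('2013-01-25 18:53:52.467', 'YYYY-MM-DD HH24:MI:SS.MS'))
ORDER BY nodeid, ruleid, serial, executiontimestamp;

Because the first three columns are pinned to single values by the bounds, this ORDER BY is equivalent to the original ORDER BY executiontimestamp.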
{
"msg_contents": "Le 15/02/2013 19:04, Jeff Janes a écrit :\n> On Fri, Feb 15, 2013 at 1:00 AM, Nicolas Charles\n> <[email protected]> wrote:\n>> On 14/02/2013 20:27, Jeff Janes wrote:\n>>>> I'm surprised that the executiontimestamp index is not used, since it\n>>>> seems\n>>>> to be where most of the query time is spent.\n>>> I do not draw that conclusion from your posted information. Can you\n>>> highlight the parts of it that lead you to this conclusion?\n>> The index scan are on nodeid_idx and configurationruleid_idx, not on\n>> executiontimestamp\n>> Or am I misreading the output ?\n> You are correct about the use of nodeid_idx and\n> configurationruleid_idx. But the part about most of the time going\n> into filtering out rows that fail to match executiontimestamp is the\n> part that I don't think is supported by the explained plan. The\n> information given by the explain plan is not sufficient to decide that\n> one way or the other. Some more recent improvements in \"explain\n> (analyze, buffers)\" would make it easier (especially with\n> track_io_timing on) but it would still be hard to draw a definitive\n> conclusion. The most definitive thing would be to do the experiment\n> of adding executiontimestamp as a *trailing* column to the end of one\n> of the existing indexes and see what happens.\nI added this index\n\"composite_idx\" btree (ruleid, serial, executiontimestamp)\nand lowered the shared_buffer to 54 MB.\n\nFor this specific query, it is indeed a bit better. (23s against 28s)\n\n Sort (cost=43449.44..43459.07 rows=3854 width=252) (actual \ntime=23375.247..23394.704 rows=62403 loops=1)\n Sort Key: executiontimestamp\n Sort Method: external merge Disk: 17480kB\n -> Bitmap Heap Scan on ruddersysevents (cost=28884.44..43219.89 \nrows=3854 width=252) (actual time=1528.704..23155.991 rows=62403 loops=1)\n Recheck Cond: ((ruleid = \n'61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text) AND (serial = 10) AND \n(executiontimestamp >= to_timestamp('2012-11-22 16:00:16.005'::text, \n'YYYY-MM-DD HH24:MI:SS.MS'::text)) AND (executiontimestamp <= \nto_timestamp('2013-01-25 18:53:52.467'::text, 'YYYY-MM-DD \nHH24:MI:SS.MS'::text)) AND (nodeid = \n'31264061-5ecb-4891-9aa4-83824178f43d'::text))\n -> BitmapAnd (cost=28884.44..28884.44 rows=3854 width=0) \n(actual time=1517.310..1517.310 rows=0 loops=1)\n -> Bitmap Index Scan on composite_idx \n(cost=0.00..3062.16 rows=59325 width=0) (actual time=775.418..775.418 \nrows=811614 loops=1)\n Index Cond: ((ruleid = \n'61713ff1-aa6f-4c86-b3cb-7012bee707dd'::text) AND (serial = 10) AND \n(executiontimestamp >= to_timestamp('2012-11-22 16:00:16.005'::text, \n'YYYY-MM-DD HH24:MI:SS.MS'::text)) AND (executiontimestamp <= \nto_timestamp('2013-01-25 18:53:52.467'::text, 'YYYY-MM-DD \nHH24:MI:SS.MS'::text)))\n -> Bitmap Index Scan on nodeid_idx (cost=0.00..25820.11 \nrows=717428 width=0) (actual time=714.254..714.254 rows=690503 loops=1)\n Index Cond: (nodeid = \n'31264061-5ecb-4891-9aa4-83824178f43d'::text)\n Total runtime: 23419.411 ms\n(11 lignes)\n\nBut since there were a lot of Sort Method: external merge, and a lot of \nobject instanciations in our batch, we tried to split the job on 5 days \ninterval, so the average number of line returned is closed to 5000\n\nLoad/IO of the system was *significantly* lower during the batch \ntreatment with this index than with the previous attempt (load around .7 \ninstead of 1.8)\n\nAnd batch execution time is now 1h08 rather than 2h20\n\n>>> If the indexes fit in RAM, they fit in RAM. 
If anything, increasing\n>>> shared_buffers could make it harder to fit them entirely in RAM. If\n>>> your shared buffers undergo a lot of churn, then the OS cache and the\n>>> shared buffers tend to uselessly mirror each other, meaning there is\n>>> less space for non-redundant pages.\n>> Oh !\n>> So I completely misunderstood the meaning of shared_buffer; I figured that\n>> it was somehow the place where the data would be stored by postgres (like\n>> indexes)\n> That is correct, it is the space used by postgres to cache data. But,\n> the rest of RAM (beyond shared_buffers) will also be used to cache\n> data; but by the OS rather than by postgres. On a dedicated server,\n> the OS will generally choose to (or at least attempt to) use this\n> space for the benefit of postgres anyway. If shared_buffers > RAM/2,\n> it won't be very successful at this, but it will still try. The\n> kernel and postgres do not have intimate knowledge of each other, so\n> it is hard to arrange that all pages show up in just one place or the\n> other but not both.\n>\n\n\nThank you very much for your explainations !\n\nNicolas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 16 Feb 2013 10:25:30 +0100",
"msg_from": "Nicolas Charles <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Surprising no use of indexes - low performance"
}
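Every plan posted in this thread spills the final sort to disk ("Sort Method: external merge Disk: 17480kB"), which suggests the work_mem = 8MB from the original configuration is too small for this particular sort. A hedged sketch of raising it only for the session that runs the batch, so the global setting stays small for other connections; the 32MB figure is an assumption, and this only addresses the sort step, not the heap scan:

-- per-session override; roughly 32MB should keep a sort that
-- currently needs a 17MB on-disk merge entirely in memory
SET work_mem = '32MB';

-- ... run the crunching queries here ...

RESET work_mem;

EXPLAIN ANALYZE will show "Sort Method: quicksort" once the value is large enough.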
] |
[
{
"msg_contents": "Hi!\nI'm new to this mailinglist and I'm new to postgres as well. It is about\nour own backup software (java); we switched the DB from MySQL to\npostgres and we need some help.\nThe backup database holds all files from the server in the database. On\nmy testing platform there are about 40 mio rows and the DB size grew to\nabout 70 GB (the same DB was about 11 GB with MySQL, but this is another\nissue).\n\nBefore each backup job, there is a reset to get a consistent state of\nall files:\nUPDATE BackupFiles SET nTapeNr=0, nAFIOCounter=0, nBlockCounter=0,\ncStatus='NEW'::StatusT, bOnSetBlue=false, bOnSetYellow=false,\nnLastBackupTS= '0001-01-01 00:00:00' WHERE cStatus='NEW' OR\ncStatus='WRITING'OR cStatus='ONTAPE';\n\nExplain analyze: http://explain.depesz.com/s/8y5\nThe statement takes 60-90 minutes. In MySQL the same statement takes\nabout 2 minutes on the same HW.\nI tried to optimize the settings but until now without success.\n\nCan we optimize this update statement somehow? Do you have any other ideas?\n\nAny help is appreciated! Thank you!\n\n*My current config:*\nshared_buffers = 2GB\nwork_mem = 16MB\nwal_buffers = 16MB\ncheckpoint_segments = 45\nrandom_page_cost = 2.0\neffective_cache_size = 6GB\n\n*HW:*\n2x Intel E5405 @ 2.00GHz\n8 GB RAM\n3ware 9650SE-16ML RAID Controller, all caches enabled\nDB is on a RAID6 with 14x 1TB (since the DB partition on the RAID1 was\ntoo small)\n\n*SW:*\nDebian Squeeze 6.0 with Kernel 3.5.4\nPostgres 8.4.13 (standard Debian package)\n\n*Table:*\n+---------------+-----------------------------+---------------------------------------------------------------------+\n| Column | Type \n| Modifiers |\n+---------------+-----------------------------+---------------------------------------------------------------------+\n| _rowid | bigint | not null default\nnextval('backupfiles__rowid_seq'::regclass) |\n| cfilename | bytea | not\nnull |\n| nfilesize | bigint | not null default\n0::bigint |\n| nfilectimets | timestamp without time zone | not null default\n'1970-01-01 00:00:00'::timestamp without time zone |\n| ntapenr | integer | not null default\n0 |\n| nafiocounter | bigint | not null default\n0::bigint |\n| nblockcounter | bigint | not null default\n0::bigint |\n| cstatus | statust | not null default\n'NEW'::statust |\n| bonsetblue | boolean | default\nfalse |\n| bonsetyellow | boolean | default\nfalse |\n| nlastbackupts | timestamp without time zone | not null default\n'1970-01-01 00:00:00'::timestamp without time zone |\n+---------------+-----------------------------+---------------------------------------------------------------------+\nIndexes:\n \"backupfiles_pkey\" PRIMARY KEY, btree (_rowid)\n \"cfilename_index\" btree (cfilename)\n \"cstatus_index\" btree (cstatus)\n \"nfilectimets_index\" btree (nfilectimets)\n \"ntapenr_index\" btree (ntapenr)\n\n\n*Example row:*\n+--------+-----------------------------+-----------+---------------------+---------+--------------+---------------+---------+------------+--------------+---------------------+\n| _rowid | cfilename | nfilesize | nfilectimets \n| ntapenr | nafiocounter | nblockcounter | cstatus | bonsetblue |\nbonsetyellow | nlastbackupts |\n+--------+-----------------------------+-----------+---------------------+---------+--------------+---------------+---------+------------+--------------+---------------------+\n| 1 | /dicom/log/datadir_init.log | 1790 | 2013-01-30 14:02:48\n| 0 | 0 | 0 | NEW | f |\nf | 0001-01-01 00:00:00 
|\n+--------+-----------------------------+-----------+---------------------+---------+--------------+---------------+---------+------------+--------------+---------------------+\n\n\n\n\n-- \n\nMit freundlichen Grüßen\nBest regards\n\nFlorian Schröck\nIT Services\n\naycan Digitalsysteme GmbH\nInnere Aumühlstr. 5\n97076 Würzburg . Germany\nTel. +49 (0)9 31. 270 40 88\nFax +49 (0)9 31. 270 40 89\nmailto:[email protected]\nmailto:[email protected]\nhttp://www.aycan.de\n\nGeschäftsführer: Dipl.-Ing. Stephan Popp\nSitz der Gesellschaft: Würzburg\nEingetragen beim Amtsgericht Würzburg unter HRB 6043\nUst-Id Nr. DE 190658226\n\n\naycan PACS\n\nIhre Vorteile: www.aycan.de/pacs-wechsel <http://www.aycan.de/pacs-wechsel>\n\nWas ist ein VNA?: www.aycan.de/vna <http://www.aycan.de/vna>",
"msg_date": "Fri, 15 Feb 2013 15:30:55 +0100",
"msg_from": "=?ISO-8859-15?Q?Florian_Schr=F6ck?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very slow update statement on 40mio rows"
},
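Before rewriting the statement, it can help to measure how many of the matched rows the UPDATE actually changes, as opposed to rewriting them with identical values; that distinction is what the replies below exploit. A hedged diagnostic sketch, using the column names and reset values from the statement above (8.4-compatible, hence CASE rather than FILTER):

SELECT count(*) AS matched_by_where,
       sum(CASE WHEN ntapenr = 0 AND nafiocounter = 0 AND nblockcounter = 0
                 AND cstatus = 'NEW'
                 AND bonsetblue IS NOT DISTINCT FROM false
                 AND bonsetyellow IS NOT DISTINCT FROM false
                 AND nlastbackupts = '0001-01-01 00:00:00'
            THEN 1 ELSE 0 END) AS already_in_target_state
FROM backupfiles
WHERE cstatus IN ('NEW', 'WRITING', 'ONTAPE');

If already_in_target_state is close to matched_by_where, most of the 60-90 minutes is being spent producing dead row versions for rows that did not need to change.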
{
"msg_contents": "Florian Schröck <[email protected]> wrote:\n\n> UPDATE BackupFiles\n> SET nTapeNr=0, nAFIOCounter=0, nBlockCounter=0,\n> cStatus='NEW'::StatusT, bOnSetBlue=false,\n> bOnSetYellow=false, nLastBackupTS= '0001-01-01 00:00:00'\n> WHERE cStatus='NEW' OR cStatus='WRITING' OR cStatus='ONTAPE';\n>\n> Explain analyze: http://explain.depesz.com/s/8y5\n> The statement takes 60-90 minutes.\n\nThe EXPLAIN ANALYZE at that URL shows a runtime of 3 minutes and 41\nseconds.\n\n> I tried to optimize the settings but until now without success.\n>\n> Can we optimize this update statement somehow? Do you have any\n> other ideas?\n\nAre there any rows which would already have the values that you are\nsetting? If so, it would be faster to skip those by using this\nquery:\n\nUPDATE BackupFiles\n SET nTapeNr=0, nAFIOCounter=0, nBlockCounter=0,\n cStatus='NEW'::StatusT, bOnSetBlue=false,\n bOnSetYellow=false, nLastBackupTS= '0001-01-01 00:00:00'\n WHERE (cStatus='NEW' OR cStatus='WRITING' OR cStatus='ONTAPE')\n AND (nTapeNr <> 0 OR nAFIOCounter <> 0 OR nBlockCounter <> 0\n OR cStatus <> 'NEW'::StatusT\n OR bOnSetBlue IS DISTINCT FROM false\n OR bOnSetYellow IS DISTINCT FROM false\n OR nLastBackupTS <> '0001-01-01 00:00:00');\n\nAnother way to accomplish this is with the\nsuppress_redundant_updates_trigger() trigger function:\n\nhttp://www.postgresql.org/docs/9.2/interactive/functions-trigger.html\n\n-- \nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Feb 2013 06:59:17 -0800 (PST)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow update statement on 40mio rows"
},
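A minimal sketch of the second option Kevin links to: attaching suppress_redundant_updates_trigger() so that an UPDATE which would leave a row byte-for-byte unchanged is skipped before it creates a new row version. The trigger name is arbitrary; the function itself ships with PostgreSQL 8.4 and later.

-- skip no-op updates on this table
CREATE TRIGGER backupfiles_suppress_redundant
    BEFORE UPDATE ON backupfiles
    FOR EACH ROW
    EXECUTE PROCEDURE suppress_redundant_updates_trigger();

With the trigger in place the original, simpler WHERE clause can stay as it is, at the cost of the trigger being evaluated for every matched row.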
{
"msg_contents": "Hello Kevin,\nnot updating every row which doesn't need the update solved the problem!\nYour query took only 1 minute. :)\n\nThank you so much for the fast response, have a great weekend!\n\nPS: When you switch to \"TEXT\" on the explain URL you can see the final\nruntime which was 66 minutes with the original statement.\n\nBest regards,\nFlorian\n\nOn 02/15/2013 03:59 PM, Kevin Grittner wrote:\n> Florian Schröck <[email protected]> wrote:\n>\n>> UPDATE BackupFiles\n>> SET nTapeNr=0, nAFIOCounter=0, nBlockCounter=0,\n>> cStatus='NEW'::StatusT, bOnSetBlue=false,\n>> bOnSetYellow=false, nLastBackupTS= '0001-01-01 00:00:00'\n>> WHERE cStatus='NEW' OR cStatus='WRITING' OR cStatus='ONTAPE';\n>>\n>> Explain analyze: http://explain.depesz.com/s/8y5\n>> The statement takes 60-90 minutes.\n> The EXPLAIN ANALYZE at that URL shows a runtime of 3 minutes and 41\n> seconds.\n>\n>> I tried to optimize the settings but until now without success.\n>>\n>> Can we optimize this update statement somehow? Do you have any\n>> other ideas?\n> Are there any rows which would already have the values that you are\n> setting? If so, it would be faster to skip those by using this\n> query:\n>\n> UPDATE BackupFiles\n> SET nTapeNr=0, nAFIOCounter=0, nBlockCounter=0,\n> cStatus='NEW'::StatusT, bOnSetBlue=false,\n> bOnSetYellow=false, nLastBackupTS= '0001-01-01 00:00:00'\n> WHERE (cStatus='NEW' OR cStatus='WRITING' OR cStatus='ONTAPE')\n> AND (nTapeNr <> 0 OR nAFIOCounter <> 0 OR nBlockCounter <> 0\n> OR cStatus <> 'NEW'::StatusT\n> OR bOnSetBlue IS DISTINCT FROM false\n> OR bOnSetYellow IS DISTINCT FROM false\n> OR nLastBackupTS <> '0001-01-01 00:00:00');\n>\n> Another way to accomplish this is with the\n> suppress_redundant_updates_trigger() trigger function:\n>\n> http://www.postgresql.org/docs/9.2/interactive/functions-trigger.html\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Feb 2013 16:32:10 +0100",
"msg_from": "=?ISO-8859-1?Q?Florian_Schr=F6ck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow update statement on 40mio rows"
},
{
"msg_contents": "On Fri, Feb 15, 2013 at 9:32 AM, Florian Schröck <[email protected]> wrote:\n> Hello Kevin,\n> not updating every row which doesn't need the update solved the problem!\n> Your query took only 1 minute. :)\n>\n> Thank you so much for the fast response, have a great weekend!\n>\n> PS: When you switch to \"TEXT\" on the explain URL you can see the final\n> runtime which was 66 minutes with the original statement.\n>\n> Best regards,\n> Florian\n>\n> On 02/15/2013 03:59 PM, Kevin Grittner wrote:\n>> Florian Schröck <[email protected]> wrote:\n>>\n>>> UPDATE BackupFiles\n>>> SET nTapeNr=0, nAFIOCounter=0, nBlockCounter=0,\n>>> cStatus='NEW'::StatusT, bOnSetBlue=false,\n>>> bOnSetYellow=false, nLastBackupTS= '0001-01-01 00:00:00'\n>>> WHERE cStatus='NEW' OR cStatus='WRITING' OR cStatus='ONTAPE';\n>>>\n>>> Explain analyze: http://explain.depesz.com/s/8y5\n>>> The statement takes 60-90 minutes.\n>> The EXPLAIN ANALYZE at that URL shows a runtime of 3 minutes and 41\n>> seconds.\n>>\n>>> I tried to optimize the settings but until now without success.\n>>>\n>>> Can we optimize this update statement somehow? Do you have any\n>>> other ideas?\n>> Are there any rows which would already have the values that you are\n>> setting? If so, it would be faster to skip those by using this\n>> query:\n>>\n>> UPDATE BackupFiles\n>> SET nTapeNr=0, nAFIOCounter=0, nBlockCounter=0,\n>> cStatus='NEW'::StatusT, bOnSetBlue=false,\n>> bOnSetYellow=false, nLastBackupTS= '0001-01-01 00:00:00'\n>> WHERE (cStatus='NEW' OR cStatus='WRITING' OR cStatus='ONTAPE')\n>> AND (nTapeNr <> 0 OR nAFIOCounter <> 0 OR nBlockCounter <> 0\n>> OR cStatus <> 'NEW'::StatusT\n>> OR bOnSetBlue IS DISTINCT FROM false\n>> OR bOnSetYellow IS DISTINCT FROM false\n>> OR nLastBackupTS <> '0001-01-01 00:00:00');\n>>\n>> Another way to accomplish this is with the\n>> suppress_redundant_updates_trigger() trigger function:\n>>\n>> http://www.postgresql.org/docs/9.2/interactive/functions-trigger.html\n\nif the number of rows you actually update is not very large relative\nto size of the table, just for fun, try this:\n\n\nCREATE OR REPLACE FUNCTION BakupFilesCandidateReset(BackupFiles)\nRETURNS BOOL AS\n$$\n SELECT ($1).cStatus IN('NEW', 'WRITING', 'ONTAPE')\n AND\n (($1).nTapeNr, ($1).nAFIOCounter, ($1).nBlockCounter,\n($1).cStatus, ($1).bOnSetBlue, ($1).bOnSetYellow, ($1).nLastBackupTS)\n IS DISTINCT FROM /* simple != will suffice if values are never null */\n (0, 0, 0, 'NEW'::StatusT, false, false, '0001-01-01 00:00:00');\n$$ LANGUAGE SQL IMMUTABLE;\n\nCREATE INDEX ON BackupFiles(BakupFilesCandidateReset(BackupFiles))\n WHERE BakupFilesCandidateReset(BackupFiles);\n\n\nSELECT * FROM BackupFiles WHERE BakupFilesCandidateReset(BackupFiles);\nUPDATE BackupFiles SET ... WHERE BakupFilesCandidateReset(BackupFiles);\netc\n\nidea here is to maintain partial boolean index representing candidate\nrecords to update. plus it's nifty. this is basic mechanism that\ncan be used as foundation for very fast push pull queues.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Feb 2013 14:04:22 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow update statement on 40mio rows"
},
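A simpler variation on the same idea, without the wrapper function, is a partial index whose predicate is exactly the "still needs resetting" condition from Kevin's rewritten WHERE clause earlier in the thread. This is only a sketch (the index name and the choice of _rowid as the indexed column are arbitrary); the UPDATE can use it only while its WHERE clause implies the index predicate.

-- small index listing just the rows the reset still has to touch
CREATE INDEX backupfiles_needs_reset_idx ON backupfiles (_rowid)
WHERE cstatus IN ('NEW', 'WRITING', 'ONTAPE')
  AND (ntapenr <> 0 OR nafiocounter <> 0 OR nblockcounter <> 0
       OR cstatus <> 'NEW'
       OR bonsetblue IS DISTINCT FROM false
       OR bonsetyellow IS DISTINCT FROM false
       OR nlastbackupts <> '0001-01-01 00:00:00');

Merlin's function-based version above has the advantage that the same expression can be reused verbatim in SELECTs and UPDATEs without repeating the predicate.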
{
"msg_contents": "Hello Merlin,\nthanks for the feedback, I forwarded this to my developer, this is an\ninteresting approach.\n\n\n-- \n\nBest regards\n\nFlorian Schröck\n\nOn 02/19/2013 09:04 PM, Merlin Moncure wrote:\n> On Fri, Feb 15, 2013 at 9:32 AM, Florian Schröck <[email protected]> wrote:\n>> Hello Kevin,\n>> not updating every row which doesn't need the update solved the problem!\n>> Your query took only 1 minute. :)\n>>\n>> Thank you so much for the fast response, have a great weekend!\n>>\n>> PS: When you switch to \"TEXT\" on the explain URL you can see the final\n>> runtime which was 66 minutes with the original statement.\n>>\n>> Best regards,\n>> Florian\n>>\n>> On 02/15/2013 03:59 PM, Kevin Grittner wrote:\n>>> Florian Schröck <[email protected]> wrote:\n>>>\n>>>> UPDATE BackupFiles\n>>>> SET nTapeNr=0, nAFIOCounter=0, nBlockCounter=0,\n>>>> cStatus='NEW'::StatusT, bOnSetBlue=false,\n>>>> bOnSetYellow=false, nLastBackupTS= '0001-01-01 00:00:00'\n>>>> WHERE cStatus='NEW' OR cStatus='WRITING' OR cStatus='ONTAPE';\n>>>>\n>>>> Explain analyze: http://explain.depesz.com/s/8y5\n>>>> The statement takes 60-90 minutes.\n>>> The EXPLAIN ANALYZE at that URL shows a runtime of 3 minutes and 41\n>>> seconds.\n>>>\n>>>> I tried to optimize the settings but until now without success.\n>>>>\n>>>> Can we optimize this update statement somehow? Do you have any\n>>>> other ideas?\n>>> Are there any rows which would already have the values that you are\n>>> setting? If so, it would be faster to skip those by using this\n>>> query:\n>>>\n>>> UPDATE BackupFiles\n>>> SET nTapeNr=0, nAFIOCounter=0, nBlockCounter=0,\n>>> cStatus='NEW'::StatusT, bOnSetBlue=false,\n>>> bOnSetYellow=false, nLastBackupTS= '0001-01-01 00:00:00'\n>>> WHERE (cStatus='NEW' OR cStatus='WRITING' OR cStatus='ONTAPE')\n>>> AND (nTapeNr <> 0 OR nAFIOCounter <> 0 OR nBlockCounter <> 0\n>>> OR cStatus <> 'NEW'::StatusT\n>>> OR bOnSetBlue IS DISTINCT FROM false\n>>> OR bOnSetYellow IS DISTINCT FROM false\n>>> OR nLastBackupTS <> '0001-01-01 00:00:00');\n>>>\n>>> Another way to accomplish this is with the\n>>> suppress_redundant_updates_trigger() trigger function:\n>>>\n>>> http://www.postgresql.org/docs/9.2/interactive/functions-trigger.html\n> if the number of rows you actually update is not very large relative\n> to size of the table, just for fun, try this:\n>\n>\n> CREATE OR REPLACE FUNCTION BakupFilesCandidateReset(BackupFiles)\n> RETURNS BOOL AS\n> $$\n> SELECT ($1).cStatus IN('NEW', 'WRITING', 'ONTAPE')\n> AND\n> (($1).nTapeNr, ($1).nAFIOCounter, ($1).nBlockCounter,\n> ($1).cStatus, ($1).bOnSetBlue, ($1).bOnSetYellow, ($1).nLastBackupTS)\n> IS DISTINCT FROM /* simple != will suffice if values are never null */\n> (0, 0, 0, 'NEW'::StatusT, false, false, '0001-01-01 00:00:00');\n> $$ LANGUAGE SQL IMMUTABLE;\n>\n> CREATE INDEX ON BackupFiles(BakupFilesCandidateReset(BackupFiles))\n> WHERE BakupFilesCandidateReset(BackupFiles);\n>\n>\n> SELECT * FROM BackupFiles WHERE BakupFilesCandidateReset(BackupFiles);\n> UPDATE BackupFiles SET ... WHERE BakupFilesCandidateReset(BackupFiles);\n> etc\n>\n> idea here is to maintain partial boolean index representing candidate\n> records to update. plus it's nifty. 
this is basic mechanism that\n> can be used as foundation for very fast push pull queues.\n>\n> merlin\n>\n",
"msg_date": "Tue, 26 Feb 2013 09:01:14 +0100",
"msg_from": "=?ISO-8859-1?Q?Florian_Schr=F6ck?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow update statement on 40mio rows"
}
] |
[
{
"msg_contents": "\nOn 2012-10-09 23:09:21\nTom Lane wrote:\n > \n\n\n> re subject Why am I getting great/terrible estimates with these CTE queries?\n > You're assuming the case where the estimate is better is better for a\n> reason ... but it's only better as a result of blind dumb luck. The\n> outer-level query planner doesn't know anything about the CTE's output\n> except the estimated number of rows --- in particular, it doesn't drill\n> down to find any statistics about the join column. \n> \n\nI am also struggling with a problem involving CTEs although in my case\nit is caused by huge *under*-estimation of cardinality rather then *over*-estimation.\nThe statement is quite complex and the problem arises because there is a chain of\nRECURSIVE CTEs each defined as a query involving an earlier CTE and more tables.\nEventually there is no hope for making a good cardinality estimate.\n\nOne CTE in particular has a cardinality estimate of 1 (I guess the actual\nestimate is nearer zero and rounded up) but actual count is over 100000.\nThe planner puts this CTE as inner of a nested loop accessed by simple linear CTE scan\nand the full query then takes over 20 minutes.\n\n -> Nested Loop (cost=0.00..0.06 rows=1 width=588) (actual time=2340.421..1201593.856 rows=105984 loops=1)\n Join Filter: ((winnum.subnet_id = binoptasc.subnet_id) AND (winnum.option_code = binoptasc.option_code) AND ((winnum.option_discriminator)::text = (binoptasc.option_discriminator)::text) AND (winnum.net_rel_level = binoptasc.net_rel_level))\n Rows Removed by Join Filter: 7001612448\n Buffers: shared hit=2290941\n -> CTE Scan on winning_option_nums winnum (cost=0.00..0.02 rows=1 width=536) (actual time=2338.422..2543.684 rows=62904 loops=1)\n Buffers: shared hit=2290941\n -> CTE Scan on subnet_inhrt_options_asc binoptasc (cost=0.00..0.02 rows=1 width=584) (actual time=0.000..9.728 rows=111308 loops=62904)\n\nWhereas, (by altering various statistics to be very wrong) the entire query runs in 21 seconds.\n\nThere have been several debates about how to address situations like this where\nno practical non-query-specific statistics-gathering scheme can ever hope to\ngather enough statistics to model the later derived tables. E.g. the frowned-on\nSELECTIVITY clause and ideas for query-specific statistics.\n\nMeanwhile, I have one other suggestion aimed specifically at problematic CTEs:\nWould it be reasonable to provide a new Planner Configuration option :\n\n enable_nestloop_cte_inner (boolean)\n Enables or disables the query planner's use of nested-loop join plans in which a CTE is the inner.\n It is impossible to suppress such nested-loop joins entirely,\n but turning this variable off discourages the planner from using one\n if there are other methods available, such as sorting the CTE for merge-join\n or hashing it for hash-join.\n The default is on.\n\nJohn\n\n\n\n\n \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Feb 2013 11:45:01 -0500",
"msg_from": "John Lumby <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow query plans caused by under-estimation of CTE cardinality"
},
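Short of a new planner GUC, one workaround that already exists is to discourage nested loops for just the problem statement, since enable_nestloop is an ordinary run-time setting. A hedged sketch; note that, unlike the proposed enable_nestloop_cte_inner, this discourages nested loops for every join in the statement, which may hurt joins where a nested loop was the right choice.

BEGIN;
-- lasts only until COMMIT/ROLLBACK; the CTE-fed joins will then
-- tend to be planned as hash or merge joins instead of nested loops
SET LOCAL enable_nestloop = off;

-- ... run the recursive-CTE statement here ...

COMMIT;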
{
"msg_contents": "Since cte is already an optimization fence, you can go further and make it\ntemporary table.\nCreate table;analyze;select should make optimizer's work much easier.\n18 лют. 2013 18:45, \"John Lumby\" <[email protected]> напис.\n\n>\n> On 2012-10-09 23:09:21\n> Tom Lane wrote:\n> >\n>\n>\n> > re subject Why am I getting great/terrible estimates with these CTE\n> queries?\n> > You're assuming the case where the estimate is better is better for a\n> > reason ... but it's only better as a result of blind dumb luck. The\n> > outer-level query planner doesn't know anything about the CTE's output\n> > except the estimated number of rows --- in particular, it doesn't drill\n> > down to find any statistics about the join column.\n> >\n>\n> I am also struggling with a problem involving CTEs although in my case\n> it is caused by huge *under*-estimation of cardinality rather then\n> *over*-estimation.\n> The statement is quite complex and the problem arises because there is a\n> chain of\n> RECURSIVE CTEs each defined as a query involving an earlier CTE and more\n> tables.\n> Eventually there is no hope for making a good cardinality estimate.\n>\n> One CTE in particular has a cardinality estimate of 1 (I guess the actual\n> estimate is nearer zero and rounded up) but actual count is over 100000.\n> The planner puts this CTE as inner of a nested loop accessed by simple\n> linear CTE scan\n> and the full query then takes over 20 minutes.\n>\n> -> Nested Loop (cost=0.00..0.06 rows=1 width=588) (actual\n> time=2340.421..1201593.856 rows=105984 loops=1)\n> Join Filter: ((winnum.subnet_id = binoptasc.subnet_id) AND\n> (winnum.option_code = binoptasc.option_code) AND\n> ((winnum.option_discriminator)::text =\n> (binoptasc.option_discriminator)::text) AND (winnum.net_rel_level =\n> binoptasc.net_rel_level))\n> Rows Removed by Join Filter: 7001612448\n> Buffers: shared hit=2290941\n> -> CTE Scan on winning_option_nums winnum (cost=0.00..0.02\n> rows=1 width=536) (actual time=2338.422..2543.684 rows=62904 loops=1)\n> Buffers: shared hit=2290941\n> -> CTE Scan on subnet_inhrt_options_asc binoptasc\n> (cost=0.00..0.02 rows=1 width=584) (actual time=0.000..9.728 rows=111308\n> loops=62904)\n>\n> Whereas, (by altering various statistics to be very wrong) the entire\n> query runs in 21 seconds.\n>\n> There have been several debates about how to address situations like this\n> where\n> no practical non-query-specific statistics-gathering scheme can ever hope\n> to\n> gather enough statistics to model the later derived tables. E.g. the\n> frowned-on\n> SELECTIVITY clause and ideas for query-specific statistics.\n>\n> Meanwhile, I have one other suggestion aimed specifically at problematic\n> CTEs:\n> Would it be reasonable to provide a new Planner Configuration option :\n>\n> enable_nestloop_cte_inner (boolean)\n> Enables or disables the query planner's use of nested-loop join plans in\n> which a CTE is the inner.\n> It is impossible to suppress such nested-loop joins entirely,\n> but turning this variable off discourages the planner from using one\n> if there are other methods available, such as sorting the CTE for\n> merge-join\n> or hashing it for hash-join.\n> The default is on.\n>\n> John\n>\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nSince cte is already an optimization fence, you can go further and make it temporary table. 
",
"msg_date": "Mon, 18 Feb 2013 22:40:37 +0200",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow query plans caused by under-estimation of CTE cardinality"
},
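A self-contained toy illustration of the mechanism Vitalii describes (deliberately not the thread's schema): once the intermediate result lives in a temporary table that has been ANALYZEd, the planner sees the real row count instead of a near-zero guess and stops choosing a nested loop with a huge inner rescan.

-- stand-in for materialising one CTE level
CREATE TEMPORARY TABLE opts AS
    SELECT g AS subnet_id, g % 7 AS option_code
    FROM generate_series(1, 100000) AS g;

-- gives the planner real row counts and column statistics
ANALYZE opts;

EXPLAIN
SELECT * FROM opts a JOIN opts b USING (subnet_id, option_code);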
{
"msg_contents": "\nVitalii wrote\n> \n> Since cte is already an optimization fence, you can go further and make \n> it temporary table. \n> Create table;analyze;select should make optimizer's work much easier. \n> \nThanks Vitalii - yes, you are right, and I have used that technique on other cases like this.\n\nHowever, for this one, the entire query must be executed as a single query in order\nthat it is based on a consistent snapshot (in the Multiversion Concurrency Control sense)\nof the base table data. Using the temp table technique would allow a commit to occur\nwhich would be invisible to the part of the query which would build the temp\nbut visible to the remaining part of the query. I know I could set Repeatable Read\nfor the transaction to ensure the consistency but that causes other concurrency problems\nas this query is part of a fairly long-running transaction. I really just want this one\nquery to avoid \"dangerous\" plans (meaning relying too much on an estimate of cardinality\nof ~ 1 being really correct).\n\nI also forgot to show the fragment of \"good\" plan (from corrupting the statistics).\nIt demonstrates how effective the hash join is in comparison - \n20 minutes reduced down to 1 second for this join.\n\n -> Hash Join (cost=0.80..1.51 rows=1 width=588) (actual time=1227.517..1693.792 rows=105984 loops=1)\n Hash Cond: ((winnum.subnet_id = binoptasc.subnet_id) AND (winnum.option_code = binoptasc.option_code) AND ((winnum.option_discriminator)::text = (binoptasc.option_discriminator)::text) AND (winnum.net_rel_level = binoptasc.net_rel_level))\n Buffers: shared hit=386485 read=364\n -> CTE Scan on winning_option_nums winnum (cost=0.00..0.40 rows=20 width=536) (actual time=1174.558..1222.542 rows=62904 loops=1)\n Buffers: shared hit=386485 read=364\n -> Hash (cost=0.40..0.40 rows=20 width=584) (actual time=52.933..52.933 rows=111308 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 8644kB\n -> CTE Scan on subnet_inhrt_options_asc binoptasc (cost=0.00..0.40 rows=20 width=584) (actual time=0.001..21.651 rows=111308 loops=1)\n\n\nJohn\n\n \t\t \t \t\t \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Feb 2013 18:26:23 -0500",
"msg_from": "John Lumby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow query plans caused by under-estimation of CTE\n cardinality"
},
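For completeness, the snapshot-consistent variant John alludes to would look roughly like the sketch below: under REPEATABLE READ the temp-table build and the main query see the same snapshot of the base tables. The table body here is only a placeholder, and as John notes, holding the whole transaction at that isolation level has its own concurrency costs, so this is a trade-off rather than a recommendation.

BEGIN ISOLATION LEVEL REPEATABLE READ;

-- placeholder body; the real statement would materialise the
-- problematic CTE here and ANALYZE the result
CREATE TEMPORARY TABLE winnum_tmp ON COMMIT DROP AS
    SELECT 1 AS dummy;
ANALYZE winnum_tmp;

-- ... main statement referencing winnum_tmp instead of the CTE ...

COMMIT;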
{
"msg_contents": "John Lumby <[email protected]> writes:\n> Meanwhile,�� I have one other suggestion aimed specifically at problematic CTEs:\n> Would it be reasonable to provide a new Planner Configuration option� :\n\n> � enable_nestloop_cte_inner (boolean)\n> � Enables or disables the query planner's use of nested-loop join plans in which a CTE is the inner.\n\nSounds pretty badly thought out to me. There might be some cases where\nthis would help, but there would be many more where it would be useless\nor counterproductive.\n\nThe case that was discussed in the previous thread looked like it could\nbe addressed by teaching the planner to drill down into CTEs to find\nvariable referents, as it already does for subquery RTEs (cf\nexamine_simple_variable in selfuncs.c). I'm not sure if your case is\nsimilar or not --- you didn't provide any useful amount of detail.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Feb 2013 05:09:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow query plans caused by under-estimation of CTE cardinality"
}
] |
[
{
"msg_contents": "Hi All,\n\nHope someone can help me a little bit here:\n\nI've got a query like the following:\n--\nselect Column1, Column2, Column3\nfrom Table1\nwhere exists (select 1 from Table2 where Table2.ForeignKey =\nTable1.PrimaryKey)\nor exists (select 1 from Table3 where Table3.ForeignKey = Table1.PrimaryKey)\n--\n\nLooking at the query plan it is doing a sequential scan on both Table2\nand Table3.\n\nIf I remove one of the subqueries and turn the query into:\n--\nselect Column1, Column2, Column3\nfrom Table1\nwhere exists (select 1 from Table2 where Table2.ForeignKey =\nTable1.PrimaryKey)\n--\n\nIt is nicely doing an index scan on the index that is on Table2.ForeignKey.\n\nAs Table2 and Table3 are rather large the first query takes minutes\nwhile the second query takes 18ms.\n\nIs there a way to speed this up or an alternative way of selecting\nrecords from Table1 which have related records in Table2 or Table3 which\nis faster?\n\nKindest Regards,\n\nBastiaan Olij\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Feb 2013 17:34:56 +1100",
"msg_from": "Bastiaan Olij <[email protected]>",
"msg_from_op": true,
"msg_subject": "Speed of exist"
},
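One rewrite that sometimes lets the planner treat this as a single semi-join is to pull both probes into one IN over a UNION ALL, which is logically equivalent to the OR of the two EXISTS clauses. A hedged sketch using the placeholder names from the post; whether it actually beats the original depends on the table sizes, so both forms are worth comparing with EXPLAIN ANALYZE.

SELECT Column1, Column2, Column3
FROM Table1
WHERE PrimaryKey IN (SELECT ForeignKey FROM Table2
                     UNION ALL
                     SELECT ForeignKey FROM Table3);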
{
"msg_contents": "Limit the sub-queries to 1, i.e. :\n\nselect 1 from Table2 where Table2.ForeignKey = Table1.PrimaryKey fetch first 1 rows only\n\nAndy.\n\nOn 19.02.2013 07:34, Bastiaan Olij wrote:\n> Hi All,\n>\n> Hope someone can help me a little bit here:\n>\n> I've got a query like the following:\n> --\n> select Column1, Column2, Column3\n> from Table1\n> where exists (select 1 from Table2 where Table2.ForeignKey =\n> Table1.PrimaryKey)\n> or exists (select 1 from Table3 where Table3.ForeignKey = Table1.PrimaryKey)\n> --\n>\n> Looking at the query plan it is doing a sequential scan on both Table2\n> and Table3.\n>\n> If I remove one of the subqueries and turn the query into:\n> --\n> select Column1, Column2, Column3\n> from Table1\n> where exists (select 1 from Table2 where Table2.ForeignKey =\n> Table1.PrimaryKey)\n> --\n>\n> It is nicely doing an index scan on the index that is on Table2.ForeignKey.\n>\n> As Table2 and Table3 are rather large the first query takes minutes\n> while the second query takes 18ms.\n>\n> Is there a way to speed this up or an alternative way of selecting\n> records from Table1 which have related records in Table2 or Table3 which\n> is faster?\n>\n> Kindest Regards,\n>\n> Bastiaan Olij\n>\n>\n>\n\n-- \n------------------------------------------------------------------------------------------------------------------------\n\n*Andy Gumbrecht*\nResearch & Development\nOrpro Vision GmbH\nHefehof 24, 31785, Hameln\n\n+49 (0) 5151 809 44 21\n+49 (0) 1704 305 671\[email protected]\nwww.orprovision.com\n\n\n\n Orpro Vision GmbH\n Sitz der Gesellschaft: 31785, Hameln\n USt-Id-Nr: DE264453214\n Amtsgericht Hannover HRB204336\n Geschaeftsfuehrer: Roberto Gatti, Massimo Gatti, Adam Shaw\n\n------------------------------------------------------------------------------------------------------------------------\n\n\n Diese E-Mail enth�lt vertrauliche und/oder rechtlich gesch�tzte Informationen. Wenn Sie nicht der richtige\n Adressat sind oder diese E-Mail irrt�mlich erhalten haben, informieren Sie bitte sofort den Absender und\n vernichten Sie diese Mail. Das unerlaubte Kopieren, jegliche anderweitige Verwendung sowie die unbefugte\n Weitergabe dieser Mail ist nicht gestattet.\n\n------------------------------------------------------------------------------------------------------------------------\n\n\n This e-mail may contain confidential and/or privileged information. If you are not the intended recipient\n (or have received this e-mail in error) please notify the sender immediately and destroy this e-mail. Any\n unauthorized copying, disclosure, distribution or other use of the material or parts thereof is strictly\n forbidden.\n\n------------------------------------------------------------------------------------------------------------------------\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Feb 2013 08:31:02 +0100",
"msg_from": "Andy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speed of exist"
},
{
"msg_contents": "Hi Andy,\n\nI've tried that with the same result. One subquery works beautifully,\ntwo subqueries with an OR and it starts to do a sequential scan...\n\nThanks,\n\nBastiaan Olij\n\nOn 19/02/13 6:31 PM, Andy wrote:\n> Limit the sub-queries to 1, i.e. :\n>\n> select 1 from Table2 where Table2.ForeignKey = Table1.PrimaryKey fetch\n> first 1 rows only\n>\n> Andy.\n>\n> On 19.02.2013 07:34, Bastiaan Olij wrote:\n>> Hi All,\n>>\n>> Hope someone can help me a little bit here:\n>>\n>> I've got a query like the following:\n>> -- \n>> select Column1, Column2, Column3\n>> from Table1\n>> where exists (select 1 from Table2 where Table2.ForeignKey =\n>> Table1.PrimaryKey)\n>> or exists (select 1 from Table3 where Table3.ForeignKey =\n>> Table1.PrimaryKey)\n>> -- \n>>\n>> Looking at the query plan it is doing a sequential scan on both Table2\n>> and Table3.\n>>\n>> If I remove one of the subqueries and turn the query into:\n>> -- \n>> select Column1, Column2, Column3\n>> from Table1\n>> where exists (select 1 from Table2 where Table2.ForeignKey =\n>> Table1.PrimaryKey)\n>> -- \n>>\n>> It is nicely doing an index scan on the index that is on\n>> Table2.ForeignKey.\n>>\n>> As Table2 and Table3 are rather large the first query takes minutes\n>> while the second query takes 18ms.\n>>\n>> Is there a way to speed this up or an alternative way of selecting\n>> records from Table1 which have related records in Table2 or Table3 which\n>> is faster?\n>>\n>> Kindest Regards,\n>>\n>> Bastiaan Olij\n>>\n>>\n>>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Feb 2013 18:36:48 +1100",
"msg_from": "Bastiaan Olij <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speed of exist"
},
{
"msg_contents": "2013/2/19 Bastiaan Olij <[email protected]>:\n> Hi Andy,\n>\n> I've tried that with the same result. One subquery works beautifully,\n> two subqueries with an OR and it starts to do a sequential scan...\n\ntry to rewrite OR to two SELECTs joined by UNION ALL\n\nPavel\n\n>\n> Thanks,\n>\n> Bastiaan Olij\n>\n> On 19/02/13 6:31 PM, Andy wrote:\n>> Limit the sub-queries to 1, i.e. :\n>>\n>> select 1 from Table2 where Table2.ForeignKey = Table1.PrimaryKey fetch\n>> first 1 rows only\n>>\n>> Andy.\n>>\n>> On 19.02.2013 07:34, Bastiaan Olij wrote:\n>>> Hi All,\n>>>\n>>> Hope someone can help me a little bit here:\n>>>\n>>> I've got a query like the following:\n>>> --\n>>> select Column1, Column2, Column3\n>>> from Table1\n>>> where exists (select 1 from Table2 where Table2.ForeignKey =\n>>> Table1.PrimaryKey)\n>>> or exists (select 1 from Table3 where Table3.ForeignKey =\n>>> Table1.PrimaryKey)\n>>> --\n>>>\n>>> Looking at the query plan it is doing a sequential scan on both Table2\n>>> and Table3.\n>>>\n>>> If I remove one of the subqueries and turn the query into:\n>>> --\n>>> select Column1, Column2, Column3\n>>> from Table1\n>>> where exists (select 1 from Table2 where Table2.ForeignKey =\n>>> Table1.PrimaryKey)\n>>> --\n>>>\n>>> It is nicely doing an index scan on the index that is on\n>>> Table2.ForeignKey.\n>>>\n>>> As Table2 and Table3 are rather large the first query takes minutes\n>>> while the second query takes 18ms.\n>>>\n>>> Is there a way to speed this up or an alternative way of selecting\n>>> records from Table1 which have related records in Table2 or Table3 which\n>>> is faster?\n>>>\n>>> Kindest Regards,\n>>>\n>>> Bastiaan Olij\n>>>\n>>>\n>>>\n>>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Feb 2013 08:39:31 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speed of exist"
},
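A sketch of the rewrite Pavel suggests, with each branch keeping a single EXISTS so it can use its foreign-key index as in the fast one-subquery case. Plain UNION is used here rather than UNION ALL so a Table1 row that matches in both tables comes back once; if Column1..Column3 are not unique per row, include Table1's primary key in the select list so the UNION does not collapse distinct rows.

SELECT Column1, Column2, Column3
FROM Table1
WHERE EXISTS (SELECT 1 FROM Table2 WHERE Table2.ForeignKey = Table1.PrimaryKey)
UNION
SELECT Column1, Column2, Column3
FROM Table1
WHERE EXISTS (SELECT 1 FROM Table3 WHERE Table3.ForeignKey = Table1.PrimaryKey);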
{
"msg_contents": "Hi Pavel,\n\nThat is what I've done in this particular case but there are parts where\nI use exist checks in this way that are very cumbersome to write out\nlike that so I'm hoping there is a way to make the optimizer work with\nexistence checks in this way.\n\nCheers,\n\nBastiaan Olij\n\nOn 19/02/13 6:39 PM, Pavel Stehule wrote:\n> 2013/2/19 Bastiaan Olij <[email protected]>:\n>> Hi Andy,\n>>\n>> I've tried that with the same result. One subquery works beautifully,\n>> two subqueries with an OR and it starts to do a sequential scan...\n> try to rewrite OR to two SELECTs joined by UNION ALL\n>\n> Pavel\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 19 Feb 2013 21:18:55 +1100",
"msg_from": "Bastiaan Olij <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speed of exist"
}
] |
[
{
"msg_contents": "Hello,\n\nThe Query Plan for the query below shows a large number in its actual rows\ncount by an unknown reason. As a result Merge Join works on a large enough\ndata to slow down the query.\n\nThe table which I query has the following description:\n\n\n Table\n\"public.qor_value\"\n Column | Type |\nModifiers | Storage | Description\n-------------+------------------------+---------------------------------------------------------------------+----------+-------------\n value_id | integer | not null default\nnextval('qor_value_denorm_value_id_seq'::regclass) | plain |\n run_id | integer | not\nnull | plain\n|\n dft_id | integer | not\nnull | plain\n|\n stat_id | integer | not\nnull | plain\n|\n key | character varying(128)\n| |\nextended |\n value | numeric(22,10)\n| |\nmain |\n line_number | integer | not null default\nnextval('qor_value_line_numbering'::regclass) | plain |\n file_number | integer | not\nnull | plain\n|\nIndexes:\n \"qor_value_cluster\" btree (run_id, stat_id) CLUSTER INVALID\n \"qor_value_filtered_self_join\" btree (run_id, stat_id, key, dft_id,\nline_number) INVALID\n \"qor_value_self_join\" btree (run_id, stat_id, dft_id, key, line_number)\n\nHere is the query:\n\nEXPLAIN ANALYZE\nSELECT *\nFROM \"qor_value\" V1\nINNER JOIN \"qor_value\" V2\nUSING (\"dft_id\", \"stat_id\", \"key\")\nWHERE\nV1.\"stat_id\" = 342 AND\nV1.\"run_id\" = 60807 AND\nV2.\"run_id\" = 60875;\n\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..2513.96 rows=1 width=72) (actual\ntime=127.361..473.687 rows=66460 loops=1)\n Merge Cond: ((v1.dft_id = v2.dft_id) AND ((v1.key)::text =\n(v2.key)::text))\n -> Index Scan using qor_value_self_join on qor_value v1\n(cost=0.00..1255.60 rows=275 width=51) (actual time=89.549..97.045\nrows=1388 loops=1)\n Index Cond: ((run_id = 60807) AND (stat_id = 342))\n -> Index Scan using qor_value_self_join on qor_value v2\n(cost=0.00..1255.60 rows=275 width=51) (actual time=37.796..134.286\nrows=66343 loops=1)\n Index Cond: ((run_id = 60875) AND (stat_id = 342))\n Total runtime: 544.646 ms\n(7 rows)\n\nNote that the second Index Scan has 66343 rows in place of 1388. Here is\nthe query which proves that:\n\nSELECT COUNT(*) FROM \"qor_value\" WHERE run_id = 60875 AND stat_id = 342;\n count\n-------\n 1388\n\nPlease help me to figure out where the problem is.\nThanks in advance,\nVahe\n\nHello,The Query Plan for the query below shows a large number in its actual rows count by an unknown reason. 
",
"msg_date": "Thu, 21 Feb 2013 11:24:27 +0400",
"msg_from": "Vahe Evoyan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wrong actual number of rows in the Query Plan"
},
{
"msg_contents": "Vahe Evoyan <[email protected]> writes:\n> Merge Join (cost=0.00..2513.96 rows=1 width=72) (actual\n> time=127.361..473.687 rows=66460 loops=1)\n> Merge Cond: ((v1.dft_id = v2.dft_id) AND ((v1.key)::text =\n> (v2.key)::text))\n> -> Index Scan using qor_value_self_join on qor_value v1\n> (cost=0.00..1255.60 rows=275 width=51) (actual time=89.549..97.045\n> rows=1388 loops=1)\n> Index Cond: ((run_id = 60807) AND (stat_id = 342))\n> -> Index Scan using qor_value_self_join on qor_value v2\n> (cost=0.00..1255.60 rows=275 width=51) (actual time=37.796..134.286\n> rows=66343 loops=1)\n> Index Cond: ((run_id = 60875) AND (stat_id = 342))\n> Total runtime: 544.646 ms\n> (7 rows)\n\n> Note that the second Index Scan has 66343 rows in place of 1388.\n\nThat's not a bug. That's a result of rescanning portions of the inner\nrelation's output due to duplicate mergejoin keys in the outer relation.\nThe EXPLAIN ANALYZE machinery counts the re-fetches as if they were new\nrows, though in some sense they're not.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Feb 2013 12:58:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong actual number of rows in the Query Plan"
}
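The re-fetch counting that Tom describes is easy to see on a scratch table. The sketch below is illustrative only: the table, data and planner settings are invented and are not taken from the thread.

-- Build a table where every join key value appears many times.
CREATE TEMP TABLE refetch_demo (grp int, val int);
INSERT INTO refetch_demo
SELECT g, v FROM generate_series(1, 100) AS g, generate_series(1, 10) AS v;
ANALYZE refetch_demo;

-- Steer the planner toward a merge join for the demonstration.
SET enable_hashjoin = off;
SET enable_nestloop = off;

-- Because the outer side has ten rows per grp, the inner side is repeatedly
-- restored to the start of each matching group, and EXPLAIN ANALYZE counts
-- the re-fetched rows again: the inner node reports roughly 10,000 "actual"
-- rows even though the table only holds 1,000.
EXPLAIN ANALYZE
SELECT * FROM refetch_demo a JOIN refetch_demo b USING (grp);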
] |
[
{
"msg_contents": "Hardware: IBM X3650 M3 (2 x Xeon X5680 6C 3.33GHz), 96GB RAM. IBM X3524\nwith RAID 10 ext4 (noatime,nodiratime,data=writeback,barrier=0) volumes for\npg_xlog / data / indexes.\n\nSoftware: SLES 11 SP2 3.0.58-0.6.2-default x86_64, PostgreSQL 9.0.4.\nmax_connections = 1500\nshared_buffers = 16GB\nwork_mem = 64MB\nmaintenance_work_mem = 256MB\nwal_level = archive\nsynchronous_commit = off\nwal_buffers = 16MB\ncheckpoint_segments = 32\ncheckpoint_completion_target = 0.9\neffective_cache_size = 32GB\nWorkload: OLTP, typically with 500+ concurrent database connections, same\nLinux instance is also used as web server and application server. Far from\nideal but has worked well for 15 months.\n\nProblem: We have been running PostgreSQL 9.0.4 on SLES11 SP1, last kernel\nin use was 2.6.32-43-0.4, performance has always been great. Since updating\nfrom SLES11 SP1 to SP2 we now experience many database 'stalls' (e.g.\nnormally 'instant' queries taking many seconds, any query will be slow,\njust connecting to the database will be slow). We have trialled PostgreSQL\n9.2.3 under SLES11 SP2 with the exact same results. During these periods\nthe machine is completely responsive but anything accessing the database is\nextremely slow.\n\nI have tried increasing sched_migration_cost from 500000 to 5000000 and\nalso tried setting sched_compat_yield to 1, neither of these appeared to\nmake a difference. I don't have the parameter 'sched_autogroup_enabled'.\nNothing jumps out from top/iostat/sar/pg_stat_activity however I am very\nfar from expert in interpreting their output\n\nWe have work underway to reduce our number of connections as although it\nhas always worked ok, perhaps it makes us particularly vulnerable to\nkernel/scheduler changes.\n\nI would be very grateful for any suggestions as to the best way to diagnose\nthe source of this problem and/or general recommendations?\n\nHardware: IBM X3650 M3 (2 x Xeon X5680 6C 3.33GHz), 96GB RAM. IBM X3524 with RAID 10 ext4 (noatime,nodiratime,data=writeback,barrier=0) volumes for pg_xlog / data / indexes.\n \nSoftware: SLES 11 SP2 3.0.58-0.6.2-default x86_64, PostgreSQL 9.0.4.\nmax_connections = 1500shared_buffers = 16GBwork_mem = 64MBmaintenance_work_mem = 256MBwal_level = archivesynchronous_commit = offwal_buffers = 16MBcheckpoint_segments = 32checkpoint_completion_target = 0.9\neffective_cache_size = 32GB\nWorkload: OLTP, typically with 500+ concurrent database connections, same Linux instance is also used as web server and application server. Far from ideal but has worked well for 15 months.\nProblem: We have been running PostgreSQL 9.0.4 on SLES11 SP1, last kernel in use was 2.6.32-43-0.4, performance has always been great. Since updating from SLES11 SP1 to SP2 we now experience many database 'stalls' (e.g. normally 'instant' queries taking many seconds, any query will be slow, just connecting to the database will be slow). We have trialled PostgreSQL 9.2.3 under SLES11 SP2 with the exact same results. During these periods the machine is completely responsive but anything accessing the database is extremely slow.\nI have tried increasing sched_migration_cost from 500000 to 5000000 and also tried setting sched_compat_yield to 1, neither of these appeared to make a difference. I don't have the parameter 'sched_autogroup_enabled'. 
Nothing jumps out from top/iostat/sar/pg_stat_activity however I am very far from expert in interpreting their output\n We have work underway to reduce our number of connections as although it has always worked ok, perhaps it makes us particularly vulnerable to kernel/scheduler changes. I would be very grateful for any suggestions as to the best way to diagnose the source of this problem and/or general recommendations?",
"msg_date": "Thu, 21 Feb 2013 09:59:01 +0000",
"msg_from": "Mark Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor performance after update from SLES11 SP1 to SP2"
},
{
"msg_contents": "Hello Mark,\nif i was you i would start with making basic benchmarks regarding IO\nperformance (bonnie++ maybe?) and i would check my sysctl.conf. your\nparameters look ok to me.\nwhen these delays occur have you noticed whats causing it ? An output of\nvmstat when the delays are happening would also help.\n\n\nVasilis Ventirozos\n\nOn Thu, Feb 21, 2013 at 11:59 AM, Mark Smith <[email protected]> wrote:\n\n> Hardware: IBM X3650 M3 (2 x Xeon X5680 6C 3.33GHz), 96GB RAM. IBM X3524\n> with RAID 10 ext4 (noatime,nodiratime,data=writeback,barrier=0) volumes for\n> pg_xlog / data / indexes.\n>\n> Software: SLES 11 SP2 3.0.58-0.6.2-default x86_64, PostgreSQL 9.0.4.\n> max_connections = 1500\n> shared_buffers = 16GB\n> work_mem = 64MB\n> maintenance_work_mem = 256MB\n> wal_level = archive\n> synchronous_commit = off\n> wal_buffers = 16MB\n> checkpoint_segments = 32\n> checkpoint_completion_target = 0.9\n> effective_cache_size = 32GB\n> Workload: OLTP, typically with 500+ concurrent database connections, same\n> Linux instance is also used as web server and application server. Far from\n> ideal but has worked well for 15 months.\n>\n> Problem: We have been running PostgreSQL 9.0.4 on SLES11 SP1, last kernel\n> in use was 2.6.32-43-0.4, performance has always been great. Since updating\n> from SLES11 SP1 to SP2 we now experience many database 'stalls' (e.g.\n> normally 'instant' queries taking many seconds, any query will be slow,\n> just connecting to the database will be slow). We have trialled PostgreSQL\n> 9.2.3 under SLES11 SP2 with the exact same results. During these periods\n> the machine is completely responsive but anything accessing the database is\n> extremely slow.\n>\n> I have tried increasing sched_migration_cost from 500000 to 5000000 and\n> also tried setting sched_compat_yield to 1, neither of these appeared to\n> make a difference. I don't have the parameter 'sched_autogroup_enabled'.\n> Nothing jumps out from top/iostat/sar/pg_stat_activity however I am very\n> far from expert in interpreting their output\n>\n> We have work underway to reduce our number of connections as although it\n> has always worked ok, perhaps it makes us particularly vulnerable to\n> kernel/scheduler changes.\n>\n> I would be very grateful for any suggestions as to the best way to\n> diagnose the source of this problem and/or general recommendations?\n>\n\nHello Mark,if i was you i would start with making basic benchmarks regarding IO performance (bonnie++ maybe?) and i would check my sysctl.conf. your parameters look ok to me.when these delays occur have you noticed whats causing it ? An output of vmstat when the delays are happening would also help.\nVasilis Ventirozos On Thu, Feb 21, 2013 at 11:59 AM, Mark Smith <[email protected]> wrote:\nHardware: IBM X3650 M3 (2 x Xeon X5680 6C 3.33GHz), 96GB RAM. IBM X3524 with RAID 10 ext4 (noatime,nodiratime,data=writeback,barrier=0) volumes for pg_xlog / data / indexes.\n \nSoftware: SLES 11 SP2 3.0.58-0.6.2-default x86_64, PostgreSQL 9.0.4.\nmax_connections = 1500shared_buffers = 16GBwork_mem = 64MBmaintenance_work_mem = 256MBwal_level = archivesynchronous_commit = offwal_buffers = 16MBcheckpoint_segments = 32checkpoint_completion_target = 0.9\n\neffective_cache_size = 32GB\nWorkload: OLTP, typically with 500+ concurrent database connections, same Linux instance is also used as web server and application server. 
Far from ideal but has worked well for 15 months.\nProblem: We have been running PostgreSQL 9.0.4 on SLES11 SP1, last kernel in use was 2.6.32-43-0.4, performance has always been great. Since updating from SLES11 SP1 to SP2 we now experience many database 'stalls' (e.g. normally 'instant' queries taking many seconds, any query will be slow, just connecting to the database will be slow). We have trialled PostgreSQL 9.2.3 under SLES11 SP2 with the exact same results. During these periods the machine is completely responsive but anything accessing the database is extremely slow.\nI have tried increasing sched_migration_cost from 500000 to 5000000 and also tried setting sched_compat_yield to 1, neither of these appeared to make a difference. I don't have the parameter 'sched_autogroup_enabled'. Nothing jumps out from top/iostat/sar/pg_stat_activity however I am very far from expert in interpreting their output\n\n We have work underway to reduce our number of connections as although it has always worked ok, perhaps it makes us particularly vulnerable to kernel/scheduler changes. I would be very grateful for any suggestions as to the best way to diagnose the source of this problem and/or general recommendations?",
"msg_date": "Thu, 21 Feb 2013 12:25:20 +0200",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance after update from SLES11 SP1 to SP2"
},
{
"msg_contents": "There's another thread in the list started by Josh Berkus about poor\nperformance with a similar kernel. I'd look that thread up and see if\nyou and he have enough in common to work together on this.\n\nOn Thu, Feb 21, 2013 at 2:59 AM, Mark Smith <[email protected]> wrote:\n> Hardware: IBM X3650 M3 (2 x Xeon X5680 6C 3.33GHz), 96GB RAM. IBM X3524 with\n> RAID 10 ext4 (noatime,nodiratime,data=writeback,barrier=0) volumes for\n> pg_xlog / data / indexes.\n>\n> Software: SLES 11 SP2 3.0.58-0.6.2-default x86_64, PostgreSQL 9.0.4.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Feb 2013 03:36:07 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance after update from SLES11 SP1 to SP2"
},
{
"msg_contents": "On Thu, Feb 21, 2013 at 1:59 AM, Mark Smith <[email protected]> wrote:\n> Software: SLES 11 SP2 3.0.58-0.6.2-default x86_64, PostgreSQL 9.0.4.\n\n[skipped]\n\n> Problem: We have been running PostgreSQL 9.0.4 on SLES11 SP1, last kernel in\n> use was 2.6.32-43-0.4, performance has always been great. Since updating\n> from SLES11 SP1 to SP2 we now experience many database 'stalls' (e.g.\n> normally 'instant' queries taking many seconds, any query will be slow, just\n> connecting to the database will be slow).\n\nIt reminds me a transparent huge pages defragmentation issue that was\nfound in recent kernels.\n\nTransparent huge pages defragmentation could lead to unpredictable\ndatabase stalls on some Linux kernels. The recommended settings for\nthis are below.\n\ndb1: ~ # echo always > /sys/kernel/mm/transparent_hugepage/enabled\ndb1: ~ # echo madvise > /sys/kernel/mm/transparent_hugepage/defrag\n\nI am collecting recommendations for DB server configuration by the\nlink below. Try to look at it also if the above wont help.\n\nhttp://code.google.com/p/pgcookbook/wiki/Database_Server_Configuration\n\n> We have trialled PostgreSQL 9.2.3\n> under SLES11 SP2 with the exact same results. During these periods the\n> machine is completely responsive but anything accessing the database is\n> extremely slow.\n>\n> I have tried increasing sched_migration_cost from 500000 to 5000000 and also\n> tried setting sched_compat_yield to 1, neither of these appeared to make a\n> difference. I don't have the parameter 'sched_autogroup_enabled'. Nothing\n> jumps out from top/iostat/sar/pg_stat_activity however I am very far from\n> expert in interpreting their output\n>\n> We have work underway to reduce our number of connections as although it has\n> always worked ok, perhaps it makes us particularly vulnerable to\n> kernel/scheduler changes.\n>\n> I would be very grateful for any suggestions as to the best way to diagnose\n> the source of this problem and/or general recommendations?\n\n\n\n--\nSergey Konoplev\nDatabase and Software Architect\nhttp://www.linkedin.com/in/grayhemp\n\nPhones:\nUSA +1 415 867 9984\nRussia, Moscow +7 901 903 0499\nRussia, Krasnodar +7 988 888 1979\n\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Feb 2013 08:23:02 -0800",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance after update from SLES11 SP1 to SP2"
},
{
"msg_contents": "On 21 February 2013 16:23, Sergey Konoplev <[email protected]> wrote:\n\n> On Thu, Feb 21, 2013 at 1:59 AM, Mark Smith <[email protected]>\n> wrote:\n> > Software: SLES 11 SP2 3.0.58-0.6.2-default x86_64, PostgreSQL 9.0.4.\n>\n> [skipped]\n>\n> > Problem: We have been running PostgreSQL 9.0.4 on SLES11 SP1, last\n> kernel in\n> > use was 2.6.32-43-0.4, performance has always been great. Since updating\n> > from SLES11 SP1 to SP2 we now experience many database 'stalls' (e.g.\n> > normally 'instant' queries taking many seconds, any query will be slow,\n> just\n> > connecting to the database will be slow).\n>\n> It reminds me a transparent huge pages defragmentation issue that was\n> found in recent kernels.\n>\n> Transparent huge pages defragmentation could lead to unpredictable\n> database stalls on some Linux kernels. The recommended settings for\n> this are below.\n>\n> db1: ~ # echo always > /sys/kernel/mm/transparent_hugepage/enabled\n> db1: ~ # echo madvise > /sys/kernel/mm/transparent_hugepage/defrag\n>\n> [skipped]\n\n\nSergey - your suggestion to look at transparent huge pages (THP) has\nresolved the issue for us, thank you so much. We had noticed abnormally\nhigh system CPU usage but didn't get much beyond that in our analysis.\n\nWe disabled THP altogether and it was quite simply as if we had turned the\n'poor performance' tap off. Since then we have had no slow queries / stalls\nat all and system CPU is consistently very low. We changed many things\nwhilst trying to resolve this issue but the THP change was done in\nisolation and we can therefore be confident that in our environment,\nleaving THP enabled with the default parameters is a killer.\n\nAt a later point we will experiment with enabling THP with the recommended\nmadvise defrag setting.\n\nThank you to all who responded.\n\nMark\n\nOn 21 February 2013 16:23, Sergey Konoplev <[email protected]> wrote:\n\nOn Thu, Feb 21, 2013 at 1:59 AM, Mark Smith <[email protected]> wrote:\n> Software: SLES 11 SP2 3.0.58-0.6.2-default x86_64, PostgreSQL 9.0.4.[skipped]\n> Problem: We have been running PostgreSQL 9.0.4 on SLES11 SP1, last kernel in> use was 2.6.32-43-0.4, performance has always been great. Since updating> from SLES11 SP1 to SP2 we now experience many database 'stalls' (e.g.\n> normally 'instant' queries taking many seconds, any query will be slow, just> connecting to the database will be slow).It reminds me a transparent huge pages defragmentation issue that was\nfound in recent kernels.Transparent huge pages defragmentation could lead to unpredictabledatabase stalls on some Linux kernels. The recommended settings forthis are below.db1: ~ # echo always > /sys/kernel/mm/transparent_hugepage/enabled\ndb1: ~ # echo madvise > /sys/kernel/mm/transparent_hugepage/defrag[skipped]\n \nSergey - your suggestion to look at transparent huge pages (THP) has resolved the issue for us, thank you so much. We had noticed abnormally high system CPU usage but didn't get much beyond that in our analysis. \n \nWe disabled THP altogether and it was quite simply as if we had turned the 'poor performance' tap off. Since then we have had no slow queries / stalls at all and system CPU is consistently very low. 
We changed many things whilst trying to resolve this issue but the THP change was done in isolation and we can therefore be confident that in our environment, leaving THP enabled with the default parameters is a killer.\n \nAt a later point we will experiment with enabling THP with the recommended madvise defrag setting.\n \nThank you to all who responded.\n \nMark",
"msg_date": "Tue, 5 Mar 2013 14:13:47 +0000",
"msg_from": "Mark Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance after update from SLES11 SP1 to SP2"
}
] |
[
{
"msg_contents": "(Sorry moderators for any double posts, I keep making subscription errors.\nHopefully this one gets through)\n \nHi speed freaks,\n\nCan anyone tell me why the bitmap heap scan takes so long to start for this\nquery? (SQL and EXPLAIN ANALYZE follows).\n\nThe big culprit in this appears to be:\n-> Bitmap Index Scan on log_2013_01_session_idx (cost=0.00..63186.52\nrows=2947664 width=0) (actual time=32611.918..32611.918 rows=2772042\nloops=1)\"\nIndex Cond: (session_id = 27)\"\n\nI can't see anything that occurs between actual time 0.0..32611.918 that\nthis could be waiting on. Is it building the bitmap?\n\nRunning the query a second time yields this:\n\n-> Bitmap Index Scan on log_2013_01_session_idx (cost=0.00..63186.52\nrows=2947664 width=0) (actual time=2896.601..2896.601 rows=2772042 loops=1)\"\nIndex Cond: (session_id = 27)\"\n\nDoes the bitmap then get cached? These queries are built dynamically and\ncalled rarely, so their first-time performance is important. I'd prefer a\nstrategy that allowed fast performance the first time, rather than slow the\nfirst time and extremely fast subsequently.\n\nThanks,\n\nCarlo\n\nSELECT\n l.session_id,\n l.log_id,\n s.session_type_code,\n coalesce(st.name, '?' || s.session_type_code || '?') AS\nsession_type_name,\n l.input_resource_id,\n ir.impt_schema AS input_resource_table_schema,\n ir.impt_table AS input_resource_table_name,\n ir.resource AS input_resource_name,\n l.input_resource_pkey_id,\n tar_table.table_schema,\n tar_table.table_name,\n l.target_pkey_id AS table_pkey_id,\n tar_op.name AS operation,\n tar_note.name AS note\nFROM mdx_audit.log_2013_01 AS l\nJOIN mdx_audit.session AS s USING (session_id) JOIN mdx_audit.target_table\nAS tar_table USING (target_table_id) JOIN mdx_audit.target_operation_type AS\ntar_op USING (target_operation_type_code) LEFT OUTER JOIN\nmdx_audit.target_note AS tar_note USING (target_note_id) LEFT OUTER JOIN\nmdx_audit.session_type AS st USING (session_type_code) LEFT OUTER JOIN\nmdx_core.input_resource AS ir USING (input_resource_id) WHERE\n l.session_id = 27\n AND \n (\n input_resource_pkey_id IS NULL\n OR input_resource_pkey_id IN (\n 494568472,\n 494568473,\n 494568474,\n 494568475,\n 494568476,\n 494568477,\n 494568478,\n 494568479,\n 494568480,\n 494568481,\n 494568482,\n 494568483,\n 494568484,\n 494568485,\n 494568486,\n 494568487,\n 494568488,\n 494568489,\n 494568490\n )\n )\n\n\"Hash Left Join (cost=63191.88..853169.29 rows=92 width=2199) (actual\ntime=34185.045..44528.710 rows=603 loops=1)\"\n\" Hash Cond: (l.input_resource_id = ir.input_resource_id)\"\n\" -> Hash Left Join (cost=63190.22..853165.68 rows=92 width=1377) (actual\ntime=34184.963..44528.391 rows=603 loops=1)\"\n\" Hash Cond: (l.target_note_id = tar_note.target_note_id)\"\n\" -> Hash Join (cost=63189.07..853164.06 rows=92 width=1161)\n(actual time=34184.872..44528.167 rows=603 loops=1)\"\n\" Hash Cond: (l.target_operation_type_code =\ntar_op.target_operation_type_code)\"\n\" -> Nested Loop (cost=63188.00..853161.72 rows=92\nwidth=1125) (actual time=34184.809..44527.884 rows=603 loops=1)\"\n\" -> Nested Loop Left Join (cost=0.00..9.34 rows=1\nwidth=65) (actual time=12.057..12.068 rows=1 loops=1)\"\n\" Join Filter: (s.session_type_code =\nst.session_type_code)\"\n\" -> Index Scan using session_pkey on session s\n(cost=0.00..8.27 rows=1 width=7) (actual time=6.847..6.850 rows=1 loops=1)\"\n\" Index Cond: (session_id = 27)\"\n\" -> Seq Scan on session_type st (cost=0.00..1.03\nrows=3 width=70) (actual time=5.204..5.207 rows=3 
loops=1)\"\n\" -> Hash Join (cost=63188.00..853151.47 rows=92\nwidth=1064) (actual time=34172.746..44515.696 rows=603 loops=1)\"\n\" Hash Cond: (l.target_table_id =\ntar_table.target_table_id)\"\n\" -> Bitmap Heap Scan on log_2013_01 l\n(cost=63186.57..853148.39 rows=194 width=34) (actual\ntime=34172.631..44515.318 rows=603 loops=1)\"\n\" Recheck Cond: (session_id = 27)\"\n\" Filter: ((input_resource_pkey_id IS NULL)\nOR (input_resource_pkey_id = ANY\n('{494568472,494568473,494568474,494568475,494568476,494568477,494568478,494\n568479,494568480,494568481,494568482,494568483,494568484,494568485,494568486\n,494568487,494568488,494568489,494568490}'::bigint[])))\"\n\" -> Bitmap Index Scan on\nlog_2013_01_session_idx (cost=0.00..63186.52 rows=2947664 width=0) (actual\ntime=32611.918..32611.918 rows=2772042 loops=1)\"\n\" Index Cond: (session_id = 27)\"\n\" -> Hash (cost=1.19..1.19 rows=19 width=1034)\n(actual time=0.059..0.059 rows=44 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage:\n4kB\"\n\" -> Seq Scan on target_table tar_table\n(cost=0.00..1.19 rows=19 width=1034) (actual time=0.023..0.037 rows=44\nloops=1)\"\n\" -> Hash (cost=1.03..1.03 rows=3 width=46) (actual\ntime=0.029..0.029 rows=3 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 1kB\"\n\" -> Seq Scan on target_operation_type tar_op\n(cost=0.00..1.03 rows=3 width=46) (actual time=0.024..0.025 rows=3 loops=1)\"\n\" -> Hash (cost=1.07..1.07 rows=7 width=220) (actual\ntime=0.060..0.060 rows=59 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 4kB\"\n\" -> Seq Scan on target_note tar_note (cost=0.00..1.07 rows=7\nwidth=220) (actual time=0.021..0.025 rows=59 loops=1)\"\n\" -> Hash (cost=1.29..1.29 rows=29 width=826) (actual time=0.035..0.035\nrows=33 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 4kB\"\n\" -> Seq Scan on input_resource ir (cost=0.00..1.29 rows=29\nwidth=826) (actual time=0.015..0.025 rows=33 loops=1)\"\n\"Total runtime: 44529.075 ms\"\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Feb 2013 11:57:07 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Are bitmap index scans slow to start?"
},
{
"msg_contents": "On Thu, Feb 21, 2013 at 8:57 AM, Carlo Stonebanks <\[email protected]> wrote:\n\n> (Sorry moderators for any double posts, I keep making subscription errors.\n> Hopefully this one gets through)\n>\n> Hi speed freaks,\n>\n> Can anyone tell me why the bitmap heap scan takes so long to start for this\n> query? (SQL and EXPLAIN ANALYZE follows).\n>\n\nIt is probably reading data from disk. explain (analyze, buffers) would be\nmore helpful, especially if track_io_timing were also turned on. In the\nabsence of that, my thoughts below are just best-guesses.\n\n\n>\n> The big culprit in this appears to be:\n> -> Bitmap Index Scan on log_2013_01_session_idx (cost=0.00..63186.52\n> rows=2947664 width=0) (actual time=32611.918..32611.918 rows=2772042\n> loops=1)\"\n> Index Cond: (session_id = 27)\"\n>\n> I can't see anything that occurs between actual time 0.0..32611.918 that\n> this could be waiting on. Is it building the bitmap?\n>\n\nYes. More importantly, it is reading the index data needed in order to\nbuild the bitmap.\n\n\n>\n> Running the query a second time yields this:\n>\n> -> Bitmap Index Scan on log_2013_01_session_idx (cost=0.00..63186.52\n> rows=2947664 width=0) (actual time=2896.601..2896.601 rows=2772042\n> loops=1)\"\n> Index Cond: (session_id = 27)\"\n>\n> Does the bitmap then get cached?\n\n\nNo, the bitmap itself doesn't get cached. But the data needed to construct\nthe bitmap does get cached. It gets cached by the generic caching methods\nof PG and the OS, not through something specific to bitmaps.\n\n\n> These queries are built dynamically and\n> called rarely, so their first-time performance is important.\n\n\nWhat is going on during the interregnum? Whatever it is, it seems to be\ndriving the log_2013_01_session_idx index out of the cache, but not the\nlog_2013_01 table. (Or perhaps the table visit is getting the benefit of\neffective_io_concurrency?)\n\nRebuilding the index might help, as it would put all the leaf pages holding\nvalues for session_id=27 adjacent to each other, so they would read from\ndisk faster. But with a name like \"session_id\", I don't know how long such\nclustering would last though.\n\nIf I'm right about the index disk-read time, then switching to a plain\nindex scan rather than a bitmap index scan would make no difference--either\nway the data has to come off the disk.\n\n\n\n\n> I'd prefer a\n> strategy that allowed fast performance the first time, rather than slow the\n> first time and extremely fast subsequently.\n>\n\n\nSo would PG, but it can't find such a strategy. PG optimizes all top-level\nqueries in isolation, it never assumes you will execute the same query\nrepeatedly and build that assumption into the costing process. (This is\nnot true of subqueries, where it does account for repeated executions in\nthe cost)\n\nCheers,\n\nJeff\n\nOn Thu, Feb 21, 2013 at 8:57 AM, Carlo Stonebanks <[email protected]> wrote:\n(Sorry moderators for any double posts, I keep making subscription errors.\nHopefully this one gets through)\n\nHi speed freaks,\n\nCan anyone tell me why the bitmap heap scan takes so long to start for this\nquery? (SQL and EXPLAIN ANALYZE follows).It is probably reading data from disk. explain (analyze, buffers) would be more helpful, especially if track_io_timing were also turned on. 
In the absence of that, my thoughts below are just best-guesses.\n \n\nThe big culprit in this appears to be:\n-> Bitmap Index Scan on log_2013_01_session_idx (cost=0.00..63186.52\nrows=2947664 width=0) (actual time=32611.918..32611.918 rows=2772042\nloops=1)\"\nIndex Cond: (session_id = 27)\"\n\nI can't see anything that occurs between actual time 0.0..32611.918 that\nthis could be waiting on. Is it building the bitmap?Yes. More importantly, it is reading the index data needed in order to build the bitmap. \n\nRunning the query a second time yields this:\n\n-> Bitmap Index Scan on log_2013_01_session_idx (cost=0.00..63186.52\nrows=2947664 width=0) (actual time=2896.601..2896.601 rows=2772042 loops=1)\"\nIndex Cond: (session_id = 27)\"\n\nDoes the bitmap then get cached? No, the bitmap itself doesn't get cached. But the data needed to construct the bitmap does get cached. It gets cached by the generic caching methods of PG and the OS, not through something specific to bitmaps.\n These queries are built dynamically and\ncalled rarely, so their first-time performance is important. What is going on during the interregnum? Whatever it is, it seems to be driving the log_2013_01_session_idx index out of the cache, but not the log_2013_01 table. (Or perhaps the table visit is getting the benefit of effective_io_concurrency?)\nRebuilding the index might help, as it would put all the leaf pages holding values for session_id=27 adjacent to each other, so they would read from disk faster. But with a name like \"session_id\", I don't know how long such clustering would last though.\nIf I'm right about the index disk-read time, then switching to a plain index scan rather than a bitmap index scan would make no difference--either way the data has to come off the disk. \nI'd prefer a\nstrategy that allowed fast performance the first time, rather than slow the\nfirst time and extremely fast subsequently.So would PG, but it can't find such a strategy. PG optimizes all top-level queries in isolation, it never assumes you will execute the same query repeatedly and build that assumption into the costing process. (This is not true of subqueries, where it does account for repeated executions in the cost)\n Cheers,Jeff",
"msg_date": "Thu, 21 Feb 2013 10:19:54 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
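The diagnostics Jeff suggests can be captured as below. This is only a sketch: the query is a stand-in for the original one, and track_io_timing needs PostgreSQL 9.2 or later plus sufficient privileges (the poster's server version is not stated in the thread).

-- Record time spent on block reads/writes for this session.
SET track_io_timing = on;

-- BUFFERS splits shared-buffer hits from blocks actually read, which shows
-- whether the slow first run is really waiting on disk.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM mdx_audit.log_2013_01 WHERE session_id = 27;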
{
"msg_contents": ">Rebuilding the index might help, as it would put all the leaf pages holding values for session_id=27 adjacent to each other, so they would read from disk faster. But with a name like >\"session_id\", I don't know how long such clustering would last though.\n\n>If I'm right about the index disk-read time, then switching to a plain index scan rather than a bitmap index scan would make no difference--either way the data has to come off the disk.\n\n\n\n>>I'd prefer a\n>>strategy that allowed fast performance the first time, rather than slow the\n>>first time and extremely fast subsequently.\n\nHello,\n\nif the index is only used to locate rows for single session_id, you may consider split it in a set of partial indexes.\n\ne.g.\ncreate index i_0 on foo where session_id%4 =0;\ncreate index i_1 on foo where session_id%4 =1;\ncreate index i_2 on foo where session_id%4 =2;\ncreate index i_3 on foo where session_id%4 =3;\n\n(can be built in parallel using separate threads)\n\nThen you will have to ensure that all your WHERE clauses also contain the index condition:\n\nWHERE session_id = 27 AND session_id%4 =27%4\n\nregards,\n\nMarc Mamin\n\n\n\n\n\n\n\n\n\n\n\n>Rebuilding the index might help, as it would put all the leaf pages holding values for session_id=27 adjacent to each other, so they would read from disk faster. But with a name like >\"session_id\", I don't know how long such clustering would last though.\n\n>If I'm right about the index disk-read time, then switching to a plain index scan rather than a bitmap index scan would make no difference--either way the data has to come off the disk. \n\n\n\n \n\n>>I'd prefer a\n>>strategy that allowed fast performance the first time, rather than slow the\n>>first time and extremely fast subsequently.\n\n\nHello,\n\nif the index is only used to locate rows for single session_id, you may consider split it in a set of partial indexes.\n\ne.g. \ncreate index i_0 on foo where session_id%4 =0;\ncreate index i_1 on foo where session_id%4 =1;\ncreate index i_2 on foo where session_id%4 =2;\ncreate index i_3 on foo where session_id%4 =3;\n\n(can be built in parallel using separate threads)\n\nThen you will have to ensure that all your WHERE clauses also contain the index condition:\n\nWHERE session_id = 27 AND session_id%4 =27%4\n\nregards,\n\nMarc Mamin",
"msg_date": "Thu, 21 Feb 2013 19:40:59 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
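One note on the sketch above: CREATE INDEX always needs a column or expression list, which the "e.g." lines omit. A runnable version of the same idea, keeping Marc's names (foo, i_0 .. i_3) and assuming session_id itself is the indexed column, could look like:

-- One partial index per residue class of session_id; the %4 split is Marc's example.
CREATE INDEX i_0 ON foo (session_id) WHERE session_id % 4 = 0;
CREATE INDEX i_1 ON foo (session_id) WHERE session_id % 4 = 1;
CREATE INDEX i_2 ON foo (session_id) WHERE session_id % 4 = 2;
CREATE INDEX i_3 ON foo (session_id) WHERE session_id % 4 = 3;

-- Queries must repeat the partial-index predicate with constants, otherwise
-- the planner cannot prove that one of the partial indexes applies.
EXPLAIN SELECT * FROM foo WHERE session_id = 27 AND session_id % 4 = 27 % 4;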
{
"msg_contents": "Hi Jeff, thanks for the reply.\n\n \n\n<< \n\nWhat is going on during the interregnum? Whatever it is, it seems to be\ndriving the log_2013_01_session_idx index out of the cache, but not the\nlog_2013_01 table. (Or perhaps the table visit is getting the benefit of\neffective_io_concurrency?)\n.\n\nRebuilding the index might help, as it would put all the leaf pages holding\nvalues for session_id=27 adjacent to each other, so they would read from\ndisk faster. But with a name like \"session_id\", I don't know how long such\nclustering would last though.\n\n>> \n\n \n\nTechnically, nothing should be happening. We used to keep one massive audit\nlog, and was impossible to manage due to its size. We then changed to a\nstrategy where every month a new audit log would be spawned, and since\nlog_2013_01 represents January, the log should be closed and nothing should\nhave changed (it is technically possible that a long-running process would\nspill over into February, but not by this much). So, assuming that it's\nstable, it should be a very good candidate for reindexing, no?\n\n \n\nOur effective_io_concurrency is 1, and last I heard the PG host was a LINUX\n4 drive RAID10, so I don't know if there is any benefit to raising this\nnumber - and if there was any benfit, it would be to the Bitmap Scan, and\nthe problem is the data building before the fact.\n\n \n\n>> the bitmap itself doesn't get cached. But the data needed to construct\nthe bitmap does get cached. It gets cached by the generic caching methods\nof PG and the OS, not through something specific to bitmaps.\n<<\n\n \n\nThis has always been a problem for me. I spend hours trying different\nstrategies and think I've solved the problem, when in fact it seems like a\ncache has spun up, and then something else expires it and the problem is\nback. Is there a way around this problem, can I force the expiration of a\ncache?\n\n \n\nCarlo\n\n \n\n\nHi Jeff, thanks for the reply. << What is going on during the interregnum? Whatever it is, it seems to be driving the log_2013_01_session_idx index out of the cache, but not the log_2013_01 table. (Or perhaps the table visit is getting the benefit of effective_io_concurrency?)…Rebuilding the index might help, as it would put all the leaf pages holding values for session_id=27 adjacent to each other, so they would read from disk faster. But with a name like \"session_id\", I don't know how long such clustering would last though.>> Technically, nothing should be happening. We used to keep one massive audit log, and was impossible to manage due to its size. We then changed to a strategy where every month a new audit log would be spawned, and since log_2013_01 represents January, the log should be closed and nothing should have changed (it is technically possible that a long-running process would spill over into February, but not by this much). So, assuming that it’s stable, it should be a very good candidate for reindexing, no? Our effective_io_concurrency is 1, and last I heard the PG host was a LINUX 4 drive RAID10, so I don’t know if there is any benefit to raising this number – and if there was any benfit, it would be to the Bitmap Scan, and the problem is the data building before the fact. >> the bitmap itself doesn't get cached. But the data needed to construct the bitmap does get cached. It gets cached by the generic caching methods of PG and the OS, not through something specific to bitmaps.<< This has always been a problem for me. 
I spend hours trying different strategies and think I’ve solved the problem, when in fact it seems like a cache has spun up, and then something else expires it and the problem is back. Is there a way around this problem, can I force the expiration of a cache? Carlo",
"msg_date": "Fri, 22 Feb 2013 12:50:59 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
{
"msg_contents": "A cool idea, but if I understand it correctly very specific and fussy. New\nDB's are spawned on this model, and all the developers would have to be\naware of this non-standard behaviour, and DBA\"s would have to create these\nindexes every month, for every DB (as the log tables are created every\nmonth). There are 89 session_id values in the January log (log_2013_01) so\nthis would quickly get out of control. But - like I said - an interesting\nidea for more specific challenges.\n\n \n\nFrom: Marc Mamin [mailto:[email protected]] \nSent: February 21, 2013 2:41 PM\nTo: Jeff Janes; Carlo Stonebanks\nCc: [email protected]\nSubject: AW: [PERFORM] Are bitmap index scans slow to start?\n\n \n\n \n\n>Rebuilding the index might help, as it would put all the leaf pages holding\nvalues for session_id=27 adjacent to each other, so they would read from\ndisk faster. But with a name like >\"session_id\", I don't know how long such\nclustering would last though.\n\n>If I'm right about the index disk-read time, then switching to a plain\nindex scan rather than a bitmap index scan would make no difference--either\nway the data has to come off the disk. \n\n\n \n\n>>I'd prefer a\n>>strategy that allowed fast performance the first time, rather than slow\nthe\n>>first time and extremely fast subsequently.\n\nHello,\n\nif the index is only used to locate rows for single session_id, you may\nconsider split it in a set of partial indexes.\n\ne.g. \ncreate index i_0 on foo where session_id%4 =0;\ncreate index i_1 on foo where session_id%4 =1;\ncreate index i_2 on foo where session_id%4 =2;\ncreate index i_3 on foo where session_id%4 =3;\n\n(can be built in parallel using separate threads)\n\nThen you will have to ensure that all your WHERE clauses also contain the\nindex condition:\n\nWHERE session_id = 27 AND session_id%4 =27%4\n\nregards,\n\nMarc Mamin\n\n\nA cool idea, but if I understand it correctly very specific and fussy. New DB’s are spawned on this model, and all the developers would have to be aware of this non-standard behaviour, and DBA”s would have to create these indexes every month, for every DB (as the log tables are created every month). There are 89 session_id values in the January log (log_2013_01) so this would quickly get out of control. But – like I said – an interesting idea for more specific challenges. From: Marc Mamin [mailto:[email protected]] Sent: February 21, 2013 2:41 PMTo: Jeff Janes; Carlo StonebanksCc: [email protected]: AW: [PERFORM] Are bitmap index scans slow to start? >Rebuilding the index might help, as it would put all the leaf pages holding values for session_id=27 adjacent to each other, so they would read from disk faster. But with a name like >\"session_id\", I don't know how long such clustering would last though.>If I'm right about the index disk-read time, then switching to a plain index scan rather than a bitmap index scan would make no difference--either way the data has to come off the disk. >>I'd prefer a>>strategy that allowed fast performance the first time, rather than slow the>>first time and extremely fast subsequently.Hello,if the index is only used to locate rows for single session_id, you may consider split it in a set of partial indexes.e.g. 
create index i_0 on foo where session_id%4 =0;create index i_1 on foo where session_id%4 =1;create index i_2 on foo where session_id%4 =2;create index i_3 on foo where session_id%4 =3;(can be built in parallel using separate threads)Then you will have to ensure that all your WHERE clauses also contain the index condition:WHERE session_id = 27 AND session_id%4 =27%4regards,Marc Mamin",
"msg_date": "Fri, 22 Feb 2013 12:50:59 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
{
"msg_contents": "I can't really help, but I can make it more clear why postgres is choosing\na _bitmap_ index scan rather than a regular index scan. With a regular\nindex scan it pumps the index for the locations of the rows that it points\nto and loads those rows as it finds them. This works great if the rows in\nthe index are sorta sorted - that way it isn't jumping around the table\nrandomly. Random io is slow. In a bitmap index scan pg pumps the index\nand buffers the by shoving them in a big bitmap. Then, it walks the bitmap\nin order to produce in order io. PG makes the choice based on a measure of\nthe index's correlation.\n\nThe problem comes down to you inserting the sessions concurrently with one\nanother. My instinct would be to lower the FILLFACTOR on newly created\nindecies so they can keep their entries more in order. I'm not sure why I\nhave that instinct but it feels right. Also, you might could try\nclustering newly created tables on session_id and setting the fillfactor\ndown so rows with the same session id will stick together on disk.\n\nNow that I look stuff up on the internet I'm not sure where I saw that pg\ntries to maintain a cluster using empty space from FILLFACTOR but I _think_\nit does. I'm not sure what is going on with my google foo today.\n\nNik\n\n\nOn Fri, Feb 22, 2013 at 12:50 PM, Carlo Stonebanks <\[email protected]> wrote:\n\n> A cool idea, but if I understand it correctly very specific and fussy. New\n> DB’s are spawned on this model, and all the developers would have to be\n> aware of this non-standard behaviour, and DBA”s would have to create these\n> indexes every month, for every DB (as the log tables are created every\n> month). There are 89 session_id values in the January log (log_2013_01) so\n> this would quickly get out of control. But – like I said – an interesting\n> idea for more specific challenges.****\n>\n> ** **\n>\n> *From:* Marc Mamin [mailto:[email protected]]\n> *Sent:* February 21, 2013 2:41 PM\n> *To:* Jeff Janes; Carlo Stonebanks\n> *Cc:* [email protected]\n> *Subject:* AW: [PERFORM] Are bitmap index scans slow to start?****\n>\n> ** **\n>\n> ** **\n>\n> >Rebuilding the index might help, as it would put all the leaf pages\n> holding values for session_id=27 adjacent to each other, so they would read\n> from disk faster. But with a name like >\"session_id\", I don't know how\n> long such clustering would last though.\n>\n> >If I'm right about the index disk-read time, then switching to a plain\n> index scan rather than a bitmap index scan would make no difference--either\n> way the data has to come off the disk.\n>\n>\n> ****\n>\n> >>I'd prefer a\n> >>strategy that allowed fast performance the first time, rather than slow\n> the\n> >>first time and extremely fast subsequently.****\n>\n> Hello,\n>\n> if the index is only used to locate rows for single session_id, you may\n> consider split it in a set of partial indexes.\n>\n> e.g.\n> create index i_0 on foo where session_id%4 =0;\n> create index i_1 on foo where session_id%4 =1;\n> create index i_2 on foo where session_id%4 =2;\n> create index i_3 on foo where session_id%4 =3;\n>\n> (can be built in parallel using separate threads)\n>\n> Then you will have to ensure that all your WHERE clauses also contain the\n> index condition:\n>\n> WHERE session_id = 27 AND session_id%4 =27%4\n>\n> regards,\n>\n> Marc Mamin****\n>\n\nI can't really help, but I can make it more clear why postgres is choosing a _bitmap_ index scan rather than a regular index scan. 
With a regular index scan it pumps the index for the locations of the rows that it points to and loads those rows as it finds them. This works great if the rows in the index are sorta sorted - that way it isn't jumping around the table randomly. Random io is slow. In a bitmap index scan pg pumps the index and buffers the by shoving them in a big bitmap. Then, it walks the bitmap in order to produce in order io. PG makes the choice based on a measure of the index's correlation.\nThe problem comes down to you inserting the sessions concurrently with one another. My instinct would be to lower the FILLFACTOR on newly created indecies so they can keep their entries more in order. I'm not sure why I have that instinct but it feels right. Also, you might could try clustering newly created tables on session_id and setting the fillfactor down so rows with the same session id will stick together on disk.\nNow that I look stuff up on the internet I'm not sure where I saw that pg tries to maintain a cluster using empty space from FILLFACTOR but I _think_ it does. I'm not sure what is going on with my google foo today.\nNikOn Fri, Feb 22, 2013 at 12:50 PM, Carlo Stonebanks <[email protected]> wrote:\nA cool idea, but if I understand it correctly very specific and fussy. New DB’s are spawned on this model, and all the developers would have to be aware of this non-standard behaviour, and DBA”s would have to create these indexes every month, for every DB (as the log tables are created every month). There are 89 session_id values in the January log (log_2013_01) so this would quickly get out of control. But – like I said – an interesting idea for more specific challenges.\n \nFrom: Marc Mamin [mailto:[email protected]] \nSent: February 21, 2013 2:41 PMTo: Jeff Janes; Carlo StonebanksCc: [email protected]: AW: [PERFORM] Are bitmap index scans slow to start?\n \n\n>Rebuilding the index might help, as it would put all the leaf pages holding values for session_id=27 adjacent to each other, so they would read from disk faster. But with a name like >\"session_id\", I don't know how long such clustering would last though.\n>If I'm right about the index disk-read time, then switching to a plain index scan rather than a bitmap index scan would make no difference--either way the data has to come off the disk. \n>>I'd prefer a\n\n>>strategy that allowed fast performance the first time, rather than slow the>>first time and extremely fast subsequently.Hello,\nif the index is only used to locate rows for single session_id, you may consider split it in a set of partial indexes.e.g. create index i_0 on foo where session_id%4 =0;create index i_1 on foo where session_id%4 =1;\n\ncreate index i_2 on foo where session_id%4 =2;create index i_3 on foo where session_id%4 =3;(can be built in parallel using separate threads)Then you will have to ensure that all your WHERE clauses also contain the index condition:\nWHERE session_id = 27 AND session_id%4 =27%4regards,Marc Mamin",
"msg_date": "Fri, 22 Feb 2013 14:05:02 -0500",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
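The reindex/cluster ideas discussed in this thread could be tried on a closed-off monthly log roughly as below. The table and index names are taken from the thread, the fillfactor value is arbitrary, and note that CLUSTER rewrites the table under an exclusive lock and its ordering is not maintained for later inserts.

-- Lower the index fillfactor so pages written by future splits and rebuilds
-- keep some free space (70 is only an example value).
ALTER INDEX mdx_audit.log_2013_01_session_idx SET (fillfactor = 70);

-- One-time physical reorder of the table in index order; also rebuilds its indexes.
CLUSTER mdx_audit.log_2013_01 USING log_2013_01_session_idx;
ANALYZE mdx_audit.log_2013_01;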
{
"msg_contents": ">> Also, you might could try clustering newly created tables on session_id\nand setting the fillfactor down so rows with the same session id will stick\ntogether on disk.\n\n<< \n\n \n\nMy understanding of PG's cluster is that this is a one-time command that\ncreates a re-ordered table and doesn't maintain the clustered order until\nthe command is issued again. During the CLUSTER, the table is read and write\nlocked. So, in order for me to use this I would need to set up a timed event\nto CLUSTER occasionally.\n\n \n\n>> I can't really help, but I can make it more clear why postgres is\nchoosing a _bitmap_ index scan rather than a regular index scan\n\n<< \n\n \n\nThe EXPLAIN ANALYZE is showing it is taking a long time to prepare the\nbitmap (i.e.-> Bitmap Index Scan on log_2013_01_session_idx\n(cost=0.00..63186.52\n\nrows=2947664 width=0) (actual time=32611.918..32611.918 rows=2772042\nloops=1)\" Index Cond: (session_id = 27)\" the bitmap scan is actually very\nfast. Jeff sasys that the bitmap is not cached, so I will assume the PG\ngeneral caches being created are of general use. \n\n \n\nI think what I need to do is figure out is:\n\n \n\n1) Why does it take 36 seconds to set up the general index caches?\n\n2) What can I do about it (what stats do I need to look at)?\n\n3) How can I force these caches to expire so I can tell if the strategy\nworked?\n\n \n\n \n\n \n\n \n\nFrom: Nikolas Everett [mailto:[email protected]] \nSent: February 22, 2013 2:05 PM\nTo: Carlo Stonebanks\nCc: Marc Mamin; Jeff Janes; [email protected]\nSubject: Re: [PERFORM] Are bitmap index scans slow to start?\n\n \n\nI can't really help, but I can make it more clear why postgres is choosing a\n_bitmap_ index scan rather than a regular index scan. With a regular index\nscan it pumps the index for the locations of the rows that it points to and\nloads those rows as it finds them. This works great if the rows in the\nindex are sorta sorted - that way it isn't jumping around the table\nrandomly. Random io is slow. In a bitmap index scan pg pumps the index and\nbuffers the by shoving them in a big bitmap. Then, it walks the bitmap in\norder to produce in order io. PG makes the choice based on a measure of the\nindex's correlation.\n\n \n\nThe problem comes down to you inserting the sessions concurrently with one\nanother. My instinct would be to lower the FILLFACTOR on newly created\nindecies so they can keep their entries more in order. I'm not sure why I\nhave that instinct but it feels right. Also, you might could try clustering\nnewly created tables on session_id and setting the fillfactor down so rows\nwith the same session id will stick together on disk.\n\n \n\nNow that I look stuff up on the internet I'm not sure where I saw that pg\ntries to maintain a cluster using empty space from FILLFACTOR but I _think_\nit does. I'm not sure what is going on with my google foo today.\n\n \n\nNik\n\n \n\nOn Fri, Feb 22, 2013 at 12:50 PM, Carlo Stonebanks\n<[email protected]> wrote:\n\nA cool idea, but if I understand it correctly very specific and fussy. New\nDB's are spawned on this model, and all the developers would have to be\naware of this non-standard behaviour, and DBA\"s would have to create these\nindexes every month, for every DB (as the log tables are created every\nmonth). There are 89 session_id values in the January log (log_2013_01) so\nthis would quickly get out of control. 
But - like I said - an interesting\nidea for more specific challenges.\n\nFrom: Marc Mamin [mailto:[email protected]] \nSent: February 21, 2013 2:41 PM\nTo: Jeff Janes; Carlo Stonebanks\nCc: [email protected]\nSubject: AW: [PERFORM] Are bitmap index scans slow to start?\n\n>Rebuilding the index might help, as it would put all the leaf pages holding\nvalues for session_id=27 adjacent to each other, so they would read from\ndisk faster. But with a name like >\"session_id\", I don't know how long such\nclustering would last though.\n\n>If I'm right about the index disk-read time, then switching to a plain\nindex scan rather than a bitmap index scan would make no difference--either\nway the data has to come off the disk. \n\n>>I'd prefer a\n>>strategy that allowed fast performance the first time, rather than slow\nthe\n>>first time and extremely fast subsequently.\n\nHello,\n\nif the index is only used to locate rows for single session_id, you may\nconsider split it in a set of partial indexes.\n\ne.g. \ncreate index i_0 on foo where session_id%4 =0;\ncreate index i_1 on foo where session_id%4 =1;\ncreate index i_2 on foo where session_id%4 =2;\ncreate index i_3 on foo where session_id%4 =3;\n\n(can be built in parallel using separate threads)\n\nThen you will have to ensure that all your WHERE clauses also contain the\nindex condition:\n\nWHERE session_id = 27 AND session_id%4 =27%4\n\nregards,\n\nMarc Mamin",
"msg_date": "Fri, 22 Feb 2013 15:05:34 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
{
"msg_contents": "On 23/02/13 08:05, Nikolas Everett wrote:\n> I can't really help, but I can make it more clear why postgres is \n> choosing a _bitmap_ index scan rather than a regular index scan. With \n> a regular index scan it pumps the index for the locations of the rows \n> that it points to and loads those rows as it finds them. This works \n> great if the rows in the index are sorta sorted - that way it isn't \n> jumping around the table randomly. Random io is slow. In a bitmap \n> index scan pg pumps the index and buffers the by shoving them in a big \n> bitmap. Then, it walks the bitmap in order to produce in order io. \n> PG makes the choice based on a measure of the index's correlation.\n>\n> The problem comes down to you inserting the sessions concurrently with \n> one another. My instinct would be to lower the FILLFACTOR on newly \n> created indecies so they can keep their entries more in order. I'm \n> not sure why I have that instinct but it feels right. Also, you might \n> could try clustering newly created tables on session_id and setting \n> the fillfactor down so rows with the same session id will stick \n> together on disk.\n>\n> Now that I look stuff up on the internet I'm not sure where I saw that \n> pg tries to maintain a cluster using empty space from FILLFACTOR but I \n> _think_ it does. I'm not sure what is going on with my google foo today.\n>\n> Nik\n>\n>\n> On Fri, Feb 22, 2013 at 12:50 PM, Carlo Stonebanks \n> <[email protected] <mailto:[email protected]>> \n> wrote:\n>\n> A cool idea, but if I understand it correctly very specific and\n> fussy. New DB’s are spawned on this model, and all the developers\n> would have to be aware of this non-standard behaviour, and DBA”s\n> would have to create these indexes every month, for every DB (as\n> the log tables are created every month). There are 89 session_id\n> values in the January log (log_2013_01) so this would quickly get\n> out of control. But – like I said – an interesting idea for more\n> specific challenges.\n>\n> *From:*Marc Mamin [mailto:[email protected]\n> <mailto:[email protected]>]\n> *Sent:* February 21, 2013 2:41 PM\n> *To:* Jeff Janes; Carlo Stonebanks\n> *Cc:* [email protected]\n> <mailto:[email protected]>\n> *Subject:* AW: [PERFORM] Are bitmap index scans slow to start?\n>\n> >Rebuilding the index might help, as it would put all the leaf pages holding\n> values for session_id=27 adjacent to each other, so they would\n> read from disk faster. 
But with a name like >\"session_id\", I\n> don't know how long such clustering would last though.\n>\n> >If I'm right about the index disk-read time, then switching to a\n> plain index scan rather than a bitmap index scan would make no\n> difference--either way the data has to come off the disk.\n>\n>\n> >>I'd prefer a\n> >>strategy that allowed fast performance the first time,\n> rather than slow the\n> >>first time and extremely fast subsequently.\n>\n> Hello,\n>\n> if the index is only used to locate rows for single session_id,\n> you may consider split it in a set of partial indexes.\n>\n> e.g.\n> create index i_0 on foo where session_id%4 =0;\n> create index i_1 on foo where session_id%4 =1;\n> create index i_2 on foo where session_id%4 =2;\n> create index i_3 on foo where session_id%4 =3;\n>\n> (can be built in parallel using separate threads)\n>\n> Then you will have to ensure that all your WHERE clauses also\n> contain the index condition:\n>\n> WHERE session_id = 27 AND session_id%4 =27%4\n>\n> regards,\n>\n> Marc Mamin\n>\n>\nCould you use CLUSTER on the table after it had been closed off? If \nappropriate, that should make the queries run much faster, as related \nentries will be in the same or nearby blocks on disk.",
"msg_date": "Sat, 23 Feb 2013 09:07:20 +1300",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
{
"msg_contents": "On Friday, February 22, 2013, Carlo Stonebanks wrote:\n\n> Hi Jeff, thanks for the reply.****\n>\n> ** **\n>\n> <<** **\n>\n> What is going on during the interregnum? Whatever it is, it seems to be\n> driving the log_2013_01_session_idx index out of the cache, but not the\n> log_2013_01 table. (Or perhaps the table visit is getting the benefit of\n> effective_io_concurrency?)\n> …****\n>\n> Rebuilding the index might help, as it would put all the leaf pages\n> holding values for session_id=27 adjacent to each other, so they would read\n> from disk faster. But with a name like \"session_id\", I don't know how long\n> such clustering would last though.****\n>\n> >>** **\n>\n> ** **\n>\n> Technically, nothing should be happening. We used to keep one massive\n> audit log, and was impossible to manage due to its size. We then changed to\n> a strategy where every month a new audit log would be spawned, and since\n> log_2013_01 represents January, the log should be closed and nothing should\n> have changed (it is technically possible that a long-running process would\n> spill over into February, but not by this much). So, assuming that it’s\n> stable, it should be a very good candidate for reindexing, no?\n>\n\nYes, assuming the problem is reading the index data from disk, that sounds\nlike a good candidate for reindexing (and maybe clustering as well).\n\n> ****\n>\n> ** **\n>\n> Our effective_io_concurrency is 1, and last I heard the PG host was a\n> LINUX 4 drive RAID10, so I don’t know if there is any benefit to raising\n> this number – and if there was any benfit, it would be to the Bitmap Scan,\n> and the problem is the data building before the fact.****\n>\n> ** **\n>\n> >> the bitmap itself doesn't get cached. But the data needed to\n> construct the bitmap does get cached. It gets cached by the generic\n> caching methods of PG and the OS, not through something specific to bitmaps.\n> <<****\n>\n> ** **\n>\n> This has always been a problem for me. I spend hours trying different\n> strategies and think I’ve solved the problem, when in fact it seems like a\n> cache has spun up, and then something else expires it and the problem is\n> back. Is there a way around this problem, can I force the expiration of a\n> cache?\n>\nYou can clear the PG cache by restarting the instance. To clear the OS\ncache as well you can do this (Linux)\n\n<stop postgres>\nsync\nsudo echo 3 > /proc/sys/vm/drop_caches\n<start postgres>\n\n\nBut I think it would be better just not to execute the same query\nrepeatedly. For example, each time you execute it during testing, pick a\ndifferent session_id rather than using 27 repeatedly. (It might also be a\ngood idea to change up the hard-coded in-list values you have, but with the\nplans you are currently seeing that isn't important as those are being used\nin a filter not a look-up)\n\nCheers,\n\nJeff\n\nOn Friday, February 22, 2013, Carlo Stonebanks wrote:\nHi Jeff, thanks for the reply. << \nWhat is going on during the interregnum? Whatever it is, it seems to be driving the log_2013_01_session_idx index out of the cache, but not the log_2013_01 table. (Or perhaps the table visit is getting the benefit of effective_io_concurrency?)\n…Rebuilding the index might help, as it would put all the leaf pages holding values for session_id=27 adjacent to each other, so they would read from disk faster. But with a name like \"session_id\", I don't know how long such clustering would last though.\n>> \nTechnically, nothing should be happening. 
We used to keep one massive audit log, and was impossible to manage due to its size. We then changed to a strategy where every month a new audit log would be spawned, and since log_2013_01 represents January, the log should be closed and nothing should have changed (it is technically possible that a long-running process would spill over into February, but not by this much). So, assuming that it’s stable, it should be a very good candidate for reindexing, no?\nYes, assuming the problem is reading the index data from disk, that sounds like a good candidate for reindexing (and maybe clustering as well). \n \nOur effective_io_concurrency is 1, and last I heard the PG host was a LINUX 4 drive RAID10, so I don’t know if there is any benefit to raising this number – and if there was any benfit, it would be to the Bitmap Scan, and the problem is the data building before the fact.\n >> the bitmap itself doesn't get cached. But the data needed to construct the bitmap does get cached. It gets cached by the generic caching methods of PG and the OS, not through something specific to bitmaps.\n<< \nThis has always been a problem for me. I spend hours trying different strategies and think I’ve solved the problem, when in fact it seems like a cache has spun up, and then something else expires it and the problem is back. Is there a way around this problem, can I force the expiration of a cache?\nYou can clear the PG cache by restarting the instance. To clear the OS cache as well you can do this (Linux)<stop postgres>syncsudo echo 3 > /proc/sys/vm/drop_caches\n<start postgres>But I think it would be better just not to execute the same query repeatedly. For example, each time you execute it during testing, pick a different session_id rather than using 27 repeatedly. (It might also be a good idea to change up the hard-coded in-list values you have, but with the plans you are currently seeing that isn't important as those are being used in a filter not a look-up)\nCheers,Jeff",
"msg_date": "Sat, 23 Feb 2013 08:15:12 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Are bitmap index scans slow to start?"
},
{
"msg_contents": "On Friday, February 22, 2013, Carlo Stonebanks wrote:\n\n>\n>\n> My understanding of PG’s cluster is that this is a one-time command that\n> creates a re-ordered table and doesn’t maintain the clustered order until\n> the command is issued again. During the CLUSTER, the table is read and\n> write locked. So, in order for me to use this I would need to set up a\n> timed event to CLUSTER occasionally.\n>\n\nCorrect.\n\n\n\n> ** **\n>\n> The EXPLAIN ANALYZE is showing it is taking a long time to prepare the\n> bitmap (i.e.-> Bitmap Index Scan on log_2013_01_session_idx\n> (cost=0.00..63186.52****\n>\n> rows=2947664 width=0) (actual time=32611.918..32611.918 rows=2772042\n> loops=1)\" Index Cond: (session_id = 27)\" the bitmap scan is actually very\n> fast. Jeff sasys that the bitmap is not cached, so I will assume the PG\n> general caches being created are of general use.\n>\n\nTo clarify the \"actual time\" thing, the first number is not when the node\nreceived its first row from its downstream (or when the node was started,\nif it has no downstream). I believe that that number is when the node\nproduced its first row to send upstream, and 2nd number is when it produced\nits last row. Since a bitmap index scan only produces one \"row\" (the\nbitmap itself), these number will always be the same. In other words, the\n\"actual time\" field does not give you measure of the start-up time of the\nnode. Indeed, there is no easy way to figure that out from the output of\nEXPLAIN. Or at least this is my understanding from trial and error, this\ndoesn't seem to be documented anywhere.\n\nWhat tells you that the bitmap creation is fast is that it gets much\nfaster when run on already-cached data, so the time is going to reading in\ndata, not turning the data into the bitmap.\n\n> ****\n>\n> ** **\n>\n> I think what I need to do is figure out is:****\n>\n> ** **\n>\n> **1) **Why does it take 36 seconds to set up the general index\n> caches?\n>\n\nThey are not general index caches, just general data caches. The index\npages compete with all the other data in the system. Anyway, running the\nexplains as \"explain (analyze, buffers)\" would go a long way towards\nfiguring out why it takes so long to read the index, especially if you can\nset track_io_timing = on first.\n\nAnd then the next question would be, once they are in the cache, why don't\nthey stay there? For that you would have to know what other types of\nactivities are going on that might be driving the data out of the cache.\n\nCheers,\n\nJeff\n\nOn Friday, February 22, 2013, Carlo Stonebanks wrote:\n My understanding of PG’s cluster is that this is a one-time command that creates a re-ordered table and doesn’t maintain the clustered order until the command is issued again. During the CLUSTER, the table is read and write locked. So, in order for me to use this I would need to set up a timed event to CLUSTER occasionally.\nCorrect. \n The EXPLAIN ANALYZE is showing it is taking a long time to prepare the bitmap (i.e.-> Bitmap Index Scan on log_2013_01_session_idx (cost=0.00..63186.52\nrows=2947664 width=0) (actual time=32611.918..32611.918 rows=2772042 loops=1)\" Index Cond: (session_id = 27)\" the bitmap scan is actually very fast. Jeff sasys that the bitmap is not cached, so I will assume the PG general caches being created are of general use.\nTo clarify the \"actual time\" thing, the first number is not when the node received its first row from its downstream (or when the node was started, if it has no downstream). 
I believe that that number is when the node produced its first row to send upstream, and 2nd number is when it produced its last row. Since a bitmap index scan only produces one \"row\" (the bitmap itself), these number will always be the same. In other words, the \"actual time\" field does not give you measure of the start-up time of the node. Indeed, there is no easy way to figure that out from the output of EXPLAIN. Or at least this is my understanding from trial and error, this doesn't seem to be documented anywhere.\nWhat tells you that the bitmap creation is fast is that it gets much faster when run on already-cached data, so the time is going to reading in data, not turning the data into the bitmap. \n I think what I need to do is figure out is:\n 1) Why does it take 36 seconds to set up the general index caches?\nThey are not general index caches, just general data caches. The index pages compete with all the other data in the system. Anyway, running the explains as \"explain (analyze, buffers)\" would go a long way towards figuring out why it takes so long to read the index, especially if you can set track_io_timing = on first.\nAnd then the next question would be, once they are in the cache, why don't they stay there? For that you would have to know what other types of activities are going on that might be driving the data out of the cache.\n Cheers,Jeff",
"msg_date": "Sat, 23 Feb 2013 09:45:55 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
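A short sketch of the diagnostics Jeff suggests above. The table name and session_id value are taken from earlier messages in the thread; the real audit query is more involved, this only shows the shape of the call. Setting track_io_timing per session normally requires superuser, and the setting is available from 9.2 on:

    SET track_io_timing = on;

    -- BUFFERS reports how many pages came from shared_buffers versus the kernel,
    -- and with track_io_timing on the plan also shows time spent on those reads.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM log_2013_01 WHERE session_id = 27;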
{
"msg_contents": "Hi Jeff, thanks for the insight.\n\n \n\n<< And then the next question would be, once they are in the cache, why\ndon't they stay there? For that you would have to know what other types of\nactivities are going on that might be driving the data out of the cache.\n\n>> \n\n \n\nTo give you an idea of the activity level, each physical machine hosts\nmultiple DB's with the same structure - one DB per client.\n\n \n\nWe run automated ETL processes which digests client feeds (E) normalizes\nthem (T) and then stores them in our DB (L).\n\n \n\nLooking at the stats from our audit log, the average feed load is 4 hours,\ndivided up into 14 client sessions. Each session averages about 50 write\n(update, insert, no deletes) operations per second, representing 700 write\noperations per second. The ratio of reads per write is pretty high as the\nsystem goes through the transformation process.\n\n \n\nSince I don't know how this compares to other PG installations, the question\nof using periodic REINDEX and CLUSTER brings up these questions:\n\n \n\n1) Because we are hosting multiple DB's, what is the impact on OS and\ndisk caches?\n\n2) Is there an automated CLUSTER and REINDEX strategy that will not\ninterfere with normal operations?\n\n3) By PG standards, is this a busy DB - and does explain why the\ngeneral caches expire?\n\n \n\nThanks,\n\n \n\nCarlo\n\n\nHi Jeff, thanks for the insight. << And then the next question would be, once they are in the cache, why don't they stay there? For that you would have to know what other types of activities are going on that might be driving the data out of the cache.>> To give you an idea of the activity level, each physical machine hosts multiple DB’s with the same structure – one DB per client. We run automated ETL processes which digests client feeds (E) normalizes them (T) and then stores them in our DB (L). Looking at the stats from our audit log, the average feed load is 4 hours, divided up into 14 client sessions. Each session averages about 50 write (update, insert, no deletes) operations per second, representing 700 write operations per second. The ratio of reads per write is pretty high as the system goes through the transformation process. Since I don’t know how this compares to other PG installations, the question of using periodic REINDEX and CLUSTER brings up these questions: 1) Because we are hosting multiple DB’s, what is the impact on OS and disk caches?2) Is there an automated CLUSTER and REINDEX strategy that will not interfere with normal operations?3) By PG standards, is this a busy DB - and does explain why the general caches expire? Thanks, Carlo",
"msg_date": "Mon, 25 Feb 2013 12:04:19 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
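On question 2 above, one common way to get most of the benefit of a periodic REINDEX without blocking normal operations is to build a replacement index concurrently and then swap it in. A sketch, assuming a plain btree on session_id; the index names are illustrative, and the swap itself still takes a brief exclusive lock:

    -- Builds the new index without blocking reads or writes on the table
    -- (cannot be run inside a transaction block).
    CREATE INDEX CONCURRENTLY log_2013_01_session_idx_new
        ON log_2013_01 (session_id);

    -- Only this step needs a short exclusive lock.
    DROP INDEX log_2013_01_session_idx;
    ALTER INDEX log_2013_01_session_idx_new RENAME TO log_2013_01_session_idx;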
{
"msg_contents": "On Mon, Feb 25, 2013 at 9:04 AM, Carlo Stonebanks <\[email protected]> wrote:\n\n> Hi Jeff, thanks for the insight.****\n>\n> ** **\n>\n> << And then the next question would be, once they are in the cache, why\n> don't they stay there? For that you would have to know what other types of\n> activities are going on that might be driving the data out of the cache.**\n> **\n>\n> >>** **\n>\n> ** **\n>\n> To give you an idea of the activity level, each physical machine hosts\n> multiple DB’s with the same structure – one DB per client.****\n>\n> ** **\n>\n> We run automated ETL processes which digests client feeds (E) normalizes\n> them (T) and then stores them in our DB (L).****\n>\n> ** **\n>\n> Looking at the stats from our audit log, the average feed load is 4 hours,\n> divided up into 14 client sessions. Each session averages about 50 write\n> (update, insert, no deletes) operations per second, representing 700 write\n> operations per second.\n>\n\nIs each of these write operations just covering a single row? Does this\ndescription apply to just one of the many (how many?) databases, so that\nthere are really 14*N concurrent sessions?\n\n\n> The ratio of reads per write is pretty high as the system goes through the\n> transformation process.****\n>\n> ** **\n>\n> Since I don’t know how this compares to other PG installations, the\n> question of using periodic REINDEX and CLUSTER brings up these questions:*\n> ***\n>\n> ** **\n>\n> **1) **Because we are hosting multiple DB’s, what is the impact on\n> OS and disk caches?\n>\n\nThey have to share the RAM. One strategy would be run ETL processes only\none at a time, rather than trying to run several concurrently, if that is\nwhat you are doing. That way you can concentrate one customers data in\nRAM, and then another's, to reduce the competition.\n\n\n> ****\n>\n> **2) **Is there an automated CLUSTER and REINDEX strategy that will\n> not interfere with normal operations?****\n>\n> **3) **By PG standards, is this a busy DB - and does explain why the\n> general caches expire?\n>\n\nYou really need to know whether those reads and writes are concentrated in\na small region (relative to the amount of your RAM), or widely scattered.\nIf you are reading and writing intensively (which you do seem to be doing)\nbut only within a compact region, then it should not drive other data out\nof the cache. But, since you do seem to have IO problems from cache\nmisses, and you do have a high level of activity, the easy conclusion is\nthat you have too little RAM to hold the working size of your data.\n\nCheers,\n\nJeff\n\nOn Mon, Feb 25, 2013 at 9:04 AM, Carlo Stonebanks <[email protected]> wrote:\nHi Jeff, thanks for the insight.\n << And then the next question would be, once they are in the cache, why don't they stay there? For that you would have to know what other types of activities are going on that might be driving the data out of the cache.\n>> \nTo give you an idea of the activity level, each physical machine hosts multiple DB’s with the same structure – one DB per client.\n We run automated ETL processes which digests client feeds (E) normalizes them (T) and then stores them in our DB (L).\n Looking at the stats from our audit log, the average feed load is 4 hours, divided up into 14 client sessions. Each session averages about 50 write (update, insert, no deletes) operations per second, representing 700 write operations per second. \nIs each of these write operations just covering a single row? 
Does this description apply to just one of the many (how many?) databases, so that there are really 14*N concurrent sessions?\n The ratio of reads per write is pretty high as the system goes through the transformation process.\n Since I don’t know how this compares to other PG installations, the question of using periodic REINDEX and CLUSTER brings up these questions:\n 1) Because we are hosting multiple DB’s, what is the impact on OS and disk caches?\nThey have to share the RAM. One strategy would be run ETL processes only one at a time, rather than trying to run several concurrently, if that is what you are doing. That way you can concentrate one customers data in RAM, and then another's, to reduce the competition.\n \n2) Is there an automated CLUSTER and REINDEX strategy that will not interfere with normal operations?\n3) By PG standards, is this a busy DB - and does explain why the general caches expire?\nYou really need to know whether those reads and writes are concentrated in a small region (relative to the amount of your RAM), or widely scattered. If you are reading and writing intensively (which you do seem to be doing) but only within a compact region, then it should not drive other data out of the cache. But, since you do seem to have IO problems from cache misses, and you do have a high level of activity, the easy conclusion is that you have too little RAM to hold the working size of your data.\nCheers,Jeff",
"msg_date": "Tue, 26 Feb 2013 10:11:57 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
{
"msg_contents": "<<Is each of these write operations just covering a single row? Does this\ndescription apply to just one of the many (how many?) databases, so that\nthere are really 14*N concurrent sessions?\n\n>> \n\n \n\nAll writes are single row. All DB's have exactly the same structure, only\nthe content is different. Currently the server is hosting five active DB's -\nalthough there 14 DB's actually on the host, the balance are backups and or\ntesting environments.\n\n \n\nWhen a feed comes in, it can be anything from dozens to millions of rows,\nand may take minutes or days to run. I had asked that PG bouncer be\ninstalled in front of the host to act as a traffic cop. Try as I may to\nconvince the engineering team that fewer sessions running faster is optimal,\nthey say that the 14 concurrent sessions is based on real-world experience\nof what imports the fastest.\n\n \n\n<< You really need to know whether those reads and writes are concentrated\nin a small region (relative to the amount of your RAM), or widely scattered.\nIf you are reading and writing intensively (which you do seem to be doing)\nbut only within a compact region, then it should not drive other data out of\nthe cache. But, since you do seem to have IO problems from cache misses,\nand you do have a high level of activity, the easy conclusion is that you\nhave too little RAM to hold the working size of your data.\n>>\n\n \n\nIt won't be a problem of physical RAM, I believe there is at least 32GB of\nRAM. What constitutes \"a compact region\"? The ETL process takes the feed and\ndistributes it to 85 core tables. I have been through many PG configuration\ncycles with the generous help of people in this forum. I think the big\nproblem when getting help has been this issue of those offering assistance\nunderstanding that the whopping majority of the time, the system is\nperforming single row reads and writes. The assumption tends to be that the\nend point of an ETL should just be a series of COPY statements, and it\nshould all happen very quickly in classic SQL bulk queries.\n\n\n<<Is each of these write operations just covering a single row? Does this description apply to just one of the many (how many?) databases, so that there are really 14*N concurrent sessions?>> All writes are single row. All DB’s have exactly the same structure, only the content is different. Currently the server is hosting five active DB’s – although there 14 DB’s actually on the host, the balance are backups and or testing environments. When a feed comes in, it can be anything from dozens to millions of rows, and may take minutes or days to run. I had asked that PG bouncer be installed in front of the host to act as a traffic cop. Try as I may to convince the engineering team that fewer sessions running faster is optimal, they say that the 14 concurrent sessions is based on real-world experience of what imports the fastest. << You really need to know whether those reads and writes are concentrated in a small region (relative to the amount of your RAM), or widely scattered. If you are reading and writing intensively (which you do seem to be doing) but only within a compact region, then it should not drive other data out of the cache. But, since you do seem to have IO problems from cache misses, and you do have a high level of activity, the easy conclusion is that you have too little RAM to hold the working size of your data.>> It won’t be a problem of physical RAM, I believe there is at least 32GB of RAM. What constitutes “a compact region”? 
The ETL process takes the feed and distributes it to 85 core tables. I have been through many PG configuration cycles with the generous help of people in this forum. I think the big problem when getting help has been this issue of those offering assistance understanding that the whopping majority of the time, the system is performing single row reads and writes. The assumption tends to be that the end point of an ETL should just be a series of COPY statements, and it should all happen very quickly in classic SQL bulk queries.",
"msg_date": "Tue, 26 Feb 2013 19:33:13 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
{
"msg_contents": "On Tue, Feb 26, 2013 at 4:33 PM, Carlo Stonebanks <\[email protected]> wrote:\n\n> <<Is each of these write operations just covering a single row? Does this\n> description apply to just one of the many (how many?) databases, so that\n> there are really 14*N concurrent sessions?****\n>\n> >>** **\n>\n> ** **\n>\n> All writes are single row. All DB’s have exactly the same structure, only\n> the content is different. Currently the server is hosting five active DB’s\n> – although there 14 DB’s actually on the host, the balance are backups and\n> or testing environments.\n>\n\nI had thought you were saying that any one ETL procedure into one database\nused 14 concurrent threads. But really, each ETL procedure is\nsingle-threaded, and there can be up to 5 (or theoretically up to 14) of\nthem running at a time into different databases?\n\n\n\n> When a feed comes in, it can be anything from dozens to millions of rows,\n> and may take minutes or days to run. I had asked that PG bouncer be\n> installed in front of the host to act as a traffic cop. Try as I may to\n> convince the engineering team that fewer sessions running faster is\n> optimal, they say that the 14 concurrent sessions is based on real-world\n> experience of what imports the fastest.\n>\n\n\npgbouncer is more for making connections line up single-file when the line\nis moving at a very fast clip, say 0.01 second per turn. If I were trying\nto make tasks that can each last for hours or days line up and take turns,\nI don't think pgbouncer would be the way to go.\n\n\n\n> ****\n>\n> ** **\n>\n> << You really need to know whether those reads and writes are\n> concentrated in a small region (relative to the amount of your RAM), or\n> widely scattered. If you are reading and writing intensively (which you do\n> seem to be doing) but only within a compact region, then it should not\n> drive other data out of the cache. But, since you do seem to have IO\n> problems from cache misses, and you do have a high level of activity, the\n> easy conclusion is that you have too little RAM to hold the working size of\n> your data.\n> >>****\n>\n> ** **\n>\n> It won’t be a problem of physical RAM, I believe there is at least 32GB of\n> RAM. What constitutes “a compact region”?\n>\n\nIf you have 14 actively going on simultaneously, I'd say a compact region\nwould then be about 512 MB.\n(32GB/ 14 / margin of safety of 4). Again, assuming that that is the\nproblem.\n\n\n> The ETL process takes the feed and distributes it to 85 core tables. I\n> have been through many PG configuration cycles with the generous help of\n> people in this forum. 
I think the big problem when getting help has been\n> this issue of those offering assistance understanding that the whopping\n> majority of the time, the system is performing single row reads and writes.\n> The assumption tends to be that the end point of an ETL should just be a\n> series of COPY statements, and it should all happen very quickly in classic\n> SQL bulk queries.\n>\n\nThat is often a reasonable assumption, as ETL does end with L :)\n\nIs the original query you posted part of the transform process, rather than\nbeing the production query you run after the ETL is over?\n\nIf so, maybe you need an EL(S)TL process, where you first load the data to a\nstaging table in bulk, and then transform it in bulk rather than one row at\na time.\n\nCheers,\n\nJeff",
"msg_date": "Wed, 27 Feb 2013 10:03:50 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
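A rough sketch of the EL(S)TL shape Jeff describes above, where the feed is bulk-loaded into a staging table first and then transformed set-wise; every table, column and file name here is made up for illustration:

    -- 1. Load the raw feed into an unindexed staging table in one pass.
    --    (Server-side COPY needs file access on the server; psql's \copy is
    --    the client-side equivalent.)
    CREATE TABLE staging_feed (LIKE core_table INCLUDING DEFAULTS);
    COPY staging_feed FROM '/path/to/feed.csv' WITH (FORMAT csv);

    -- 2. Transform and load with one set-based statement instead of row by row.
    INSERT INTO core_table (item_id, name, amount)
    SELECT item_id, trim(name), amount
    FROM staging_feed
    WHERE amount IS NOT NULL;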
{
"msg_contents": "<< pgbouncer is more for making connections line up single-file when the\nline is moving at a very fast clip, say 0.01 second per turn. If I were\ntrying to make tasks that can each last for hours or days line up and take\nturns, I don't think pgbouncer would be the way to go.\n>>\n\n \n\nThe recommendation at the time was assuming that write contention was\nslowing things down and consuming resources, since I can't stop people from\ncreating big multi-threaded imports. Each import consists of about 50 writes\n\n\n \n\n>> Is the original query you posted part of the transform process, rather\nthan being the production query you run after the ETL is over?\n\n\n\nNeither, it is part of our auditing and maintenance processes. It is not\ncalled with any great frequency. The audit report generates rows defining\nhow the a particular item (an \"item\" being a particular table/row) was\ncreated: it returns the names of the import tables, the row ids, the write\noperations and any transformation messages that may have been generated -\nall in the order they occurred.\n\n \n\nYou can imagine how useful this in creating a document describing what\nhappened and why.\n\n \n\nThe same data generated by the report is used to \"resurrect\" an item. If -\nfor example - our business logic has changed, but the change only affects a\nsmall sub-set of our core data, then we perform a \"rollback\" (a logical\ncascading delete) on the affected items. Then we create a \"rebuild\" which is\na script that is generated to re-import ONLY the import table rows defined\nin the audit report.\n\n \n\nSo, this query is not called often, but the fact is that if it takes over 30\nseconds to load an item (because the audit report takes so long to prepare\nthe bitmap index scan when passed new query parameters) then it severely\nrestricts how much data we can resurrect at any one time.\n\n\n<< pgbouncer is more for making connections line up single-file when the line is moving at a very fast clip, say 0.01 second per turn. If I were trying to make tasks that can each last for hours or days line up and take turns, I don't think pgbouncer would be the way to go.>> The recommendation at the time was assuming that write contention was slowing things down and consuming resources, since I can’t stop people from creating big multi-threaded imports. Each import consists of about 50 writes >> Is the original query you posted part of the transform process, rather than being the production query you run after the ETL is over?Neither, it is part of our auditing and maintenance processes. It is not called with any great frequency. The audit report generates rows defining how the a particular item (an “item” being a particular table/row) was created: it returns the names of the import tables, the row ids, the write operations and any transformation messages that may have been generated – all in the order they occurred. You can imagine how useful this in creating a document describing what happened and why. The same data generated by the report is used to “resurrect” an item. If – for example - our business logic has changed, but the change only affects a small sub-set of our core data, then we perform a “rollback” (a logical cascading delete) on the affected items. Then we create a “rebuild” which is a script that is generated to re-import ONLY the import table rows defined in the audit report. 
So, this query is not called often, but the fact is that if it takes over 30 seconds to load an item (because the audit report takes so long to prepare the bitmap index scan when passed new query parameters) then it severely restricts how much data we can resurrect at any one time.",
"msg_date": "Wed, 27 Feb 2013 16:38:29 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
{
"msg_contents": "<< \n\nI had thought you were saying that any one ETL procedure into one database\nused 14 concurrent threads. But really, each ETL procedure is\nsingle-threaded, and there can be up to 5 (or theoretically up to 14) of\nthem running at a time into different databases?\n>>\n\n \n\nSorry, just caught this. \n\n \n\nYour first interpretation was correct. Each DB runs an ETL that can have up\nto 14 concurrent threads. I don't think the number should be that high, but\nthe engineering team insists the load time is better than fewer threads\nrunning faster.\n\n\n<< I had thought you were saying that any one ETL procedure into one database used 14 concurrent threads. But really, each ETL procedure is single-threaded, and there can be up to 5 (or theoretically up to 14) of them running at a time into different databases?>> Sorry, just caught this. Your first interpretation was correct. Each DB runs an ETL that can have up to 14 concurrent threads. I don’t think the number should be that high, but the engineering team insists the load time is better than fewer threads running faster.",
"msg_date": "Thu, 28 Feb 2013 00:12:48 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
{
"msg_contents": "<<Could you use CLUSTER on the table after it had been closed off? If\nappropriate, that should make the queries run much faster, as elated entries\nwill be in the same or nearby blocks on disk.\n\n>> \n\n \n\nTechnically, yes. That would really help, but the issue is scheduling.\nAlthough the logs are closed off for writes, they aren't closed off for\nreads, ref PG documentation: \"When a table is being clustered, an ACCESS\nEXCLUSIVE lock is acquired on it. This prevents any other database\noperations (both reads and writes) from operating on the table until the\nCLUSTER is finished.\"\n\n \n\nNot ideal, but a lot better than doing nothing at all!\n\n\n<<Could you use CLUSTER on the table after it had been closed off? If appropriate, that should make the queries run much faster, as elated entries will be in the same or nearby blocks on disk.>> Technically, yes. That would really help, but the issue is scheduling. Although the logs are closed off for writes, they aren’t closed off for reads, ref PG documentation: “When a table is being clustered, an ACCESS EXCLUSIVE lock is acquired on it. This prevents any other database operations (both reads and writes) from operating on the table until the CLUSTER is finished.” Not ideal, but a lot better than doing nothing at all!",
"msg_date": "Thu, 28 Feb 2013 15:13:50 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
{
"msg_contents": "On Thu, Feb 28, 2013 at 12:13 PM, Carlo Stonebanks <\[email protected]> wrote:\n\n> <<Could you use CLUSTER on the table after it had been closed off? If\n> appropriate, that should make the queries run much faster, as elated\n> entries will be in the same or nearby blocks on disk.****\n>\n> >>** **\n>\n> ** **\n>\n> Technically, yes. That would really help, but the issue is scheduling.\n> Although the logs are closed off for writes, they aren’t closed off for\n> reads, ref PG documentation: “When a table is being clustered, an ACCESS\n> EXCLUSIVE lock is acquired on it. This prevents any other database\n> operations (both reads and writes) from operating on the table until the\n> CLUSTER is finished.”****\n>\n> ** **\n>\n> Not ideal, but a lot better than doing nothing at all!\n>\n\nSince it is read only, you could make a copy of the table, cluster the copy\n(or just do the sorting while you make the copy), and then atomically swap\nthe two tables by renaming them inside a single transaction.\n\nThe swap process will take an exclusive lock, but it will only last for a\nfraction of second rather than the duration of the clustering operation.\n\nCheers,\n\nJeff\n\nOn Thu, Feb 28, 2013 at 12:13 PM, Carlo Stonebanks <[email protected]> wrote:\n<<Could you use CLUSTER on the table after it had been closed off? If appropriate, that should make the queries run much faster, as elated entries will be in the same or nearby blocks on disk.\n>> \nTechnically, yes. That would really help, but the issue is scheduling. Although the logs are closed off for writes, they aren’t closed off for reads, ref PG documentation: “When a table is being clustered, an ACCESS EXCLUSIVE lock is acquired on it. This prevents any other database operations (both reads and writes) from operating on the table until the CLUSTER is finished.”\n Not ideal, but a lot better than doing nothing at all!\n Since it is read only, you could make a copy of the table, cluster the copy (or just do the sorting while you make the copy), and then atomically swap the two tables by renaming them inside a single transaction.\nThe swap process will take an exclusive lock, but it will only last for a fraction of second rather than the duration of the clustering operation.Cheers,Jeff",
"msg_date": "Tue, 5 Mar 2013 09:46:34 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
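A sketch of the copy-and-swap approach described above, assuming the closed-off log table is log_2013_01 and the goal is to keep rows for the same session_id adjacent on disk. Names are illustrative, and indexes, constraints and grants have to be recreated on the copy before the swap:

    -- Build an ordered copy while the original stays fully readable.
    CREATE TABLE log_2013_01_sorted AS
        SELECT * FROM log_2013_01 ORDER BY session_id;

    CREATE INDEX log_2013_01_sorted_session_idx
        ON log_2013_01_sorted (session_id);

    -- The swap holds an exclusive lock only for the renames, not for the copy.
    BEGIN;
    ALTER TABLE log_2013_01 RENAME TO log_2013_01_old;
    ALTER TABLE log_2013_01_sorted RENAME TO log_2013_01;
    COMMIT;

    DROP TABLE log_2013_01_old;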
{
"msg_contents": "On Wed, Feb 27, 2013 at 1:38 PM, Carlo Stonebanks <\[email protected]> wrote:\n\n>\n>\n> >> Is the original query you posted part of the transform process, rather\n> than being the production query you run after the ETL is over?\n>\n> ****\n>\n> Neither, it is part of our auditing and maintenance processes. It is not\n> called with any great frequency. The audit report generates rows defining\n> how the a particular item (an “item” being a particular table/row) was\n> created: it returns the names of the import tables, the row ids, the write\n> operations and any transformation messages that may have been generated –\n> all in the order they occurred.****\n>\n> ** **\n>\n\n...\n\n\n> **\n>\n> So, this query is not called often, but the fact is that if it takes over\n> 30 seconds to load an item (because the audit report takes so long to\n> prepare the bitmap index scan when passed new query parameters) then it\n> severely restricts how much data we can resurrect at any one time.****\n>\n\nIs that a restriction you have observed, or are you extrapolating based on\na single query? If you run a bunch of similar queries in close succession,\nit is likely that the first few queries will warm up the cache, and\nfollowing queries will then run much faster. Also, if you restructure the\nseries of queries into a large one that reconstructs many rows\nsimultaneously, it might choose a more efficient path than if it is fed the\nqueries one at a time.\n\nCheers,\n\nJeff\n\nOn Wed, Feb 27, 2013 at 1:38 PM, Carlo Stonebanks <[email protected]> wrote:\n >> Is the original query you posted part of the transform process, rather than being the production query you run after the ETL is over?\nNeither, it is part of our auditing and maintenance processes. It is not called with any great frequency. The audit report generates rows defining how the a particular item (an “item” being a particular table/row) was created: it returns the names of the import tables, the row ids, the write operations and any transformation messages that may have been generated – all in the order they occurred.\n ... \n So, this query is not called often, but the fact is that if it takes over 30 seconds to load an item (because the audit report takes so long to prepare the bitmap index scan when passed new query parameters) then it severely restricts how much data we can resurrect at any one time.\nIs that a restriction you have observed, or are you extrapolating based on a single query? If you run a bunch of similar queries in close succession, it is likely that the first few queries will warm up the cache, and following queries will then run much faster. Also, if you restructure the series of queries into a large one that reconstructs many rows simultaneously, it might choose a more efficient path than if it is fed the queries one at a time.\nCheers,Jeff",
"msg_date": "Tue, 5 Mar 2013 13:20:32 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are bitmap index scans slow to start?"
},
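A small sketch of the batching idea above, assuming the per-item audit queries differ only in the session_id they look up; the column names are illustrative:

    -- One pass for several items warms the cache once and lets the planner
    -- choose a single scan instead of repeating the lookup per session_id.
    SELECT session_id, id, action, created_at
    FROM log_2013_01
    WHERE session_id IN (27, 31, 42)
    ORDER BY session_id, created_at;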
{
"msg_contents": "Sorry this took so long to get back to you. Here is where we were:\n\nI said: <<\n\nSo, this query is not called often, but the fact is that if it takes over 30\nseconds to load an item (because the audit report takes so long to prepare\nthe bitmap index scan when passed new query parameters) then it severely\nrestricts how much data we can resurrect at any one time.\n\n>> \n\n \n\nYour reply: <<\nIs that a restriction you have observed, or are you extrapolating based on a\nsingle query? If you run a bunch of similar queries in close succession, it\nis likely that the first few queries will warm up the cache, and following\nqueries will then run much faster. Also, if you restructure the series of\nqueries into a large one that reconstructs many rows simultaneously, it\nmight choose a more efficient path than if it is fed the queries one at a\ntime.\n>>\n\n \n\nActual observation. The first run with a new parameter actually takes 90\nseconds. Another run with the same parameter takes 15-30 seconds. Running\nthe query immediately afterwards with different parameters starts with a new\n90 seconds query. Unfortunately, since going to LINUX, our sys ops hiss and\nsnarl at anyone who comes anywhere near machine or DB server configs, so I\nam no longer well informed on how well optimized the machines are.\n\n \n\nUltimately, the machines need to be optimized by an expert. As I mentioned\nbefore, our ETL is entirely single-load reads-and-writes (I didn't go into\nthe \"why\" of this because the nature of the data and the product dictates\nthis). And this is an example of one of the few complex joins that return\nhundreds/thousands of rows. The problem is that a full index scan has to be\ndone before we can start building the results. So, if clustering will help\nsuch that the index scan KNOWS that there's no point is scanning the rest of\nthe index because we've gone beyond the maximum value in the list of\npossible values, then that would help, as each table being scanned has 50 -\n100 million rows (there is one table for every month of production).\n\n \n\nAs always, thanks.\n\n \n\nFrom: Jeff Janes [mailto:[email protected]] \nSent: March 5, 2013 4:21 PM\nTo: Carlo Stonebanks\nCc: [email protected]\nSubject: Re: [PERFORM] Are bitmap index scans slow to start?\n\n \n\nOn Wed, Feb 27, 2013 at 1:38 PM, Carlo Stonebanks\n<[email protected]> wrote:\n\n \n\n>> Is the original query you posted part of the transform process, rather\nthan being the production query you run after the ETL is over?\n\nNeither, it is part of our auditing and maintenance processes. It is not\ncalled with any great frequency. The audit report generates rows defining\nhow the a particular item (an \"item\" being a particular table/row) was\ncreated: it returns the names of the import tables, the row ids, the write\noperations and any transformation messages that may have been generated -\nall in the order they occurred.\n\n \n\n \n...\n\n \n\nSo, this query is not called often, but the fact is that if it takes over 30\nseconds to load an item (because the audit report takes so long to prepare\nthe bitmap index scan when passed new query parameters) then it severely\nrestricts how much data we can resurrect at any one time.\n\n\nIs that a restriction you have observed, or are you extrapolating based on a\nsingle query? If you run a bunch of similar queries in close succession, it\nis likely that the first few queries will warm up the cache, and following\nqueries will then run much faster. 
Also, if you restructure the series of\nqueries into a large one that reconstructs many rows simultaneously, it\nmight choose a more efficient path than if it is fed the queries one at a\ntime.\n\nCheers,\n\nJeff",
"msg_date": "Fri, 8 Mar 2013 16:27:54 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Are bitmap index scans slow to start?"
}
] |
[
{
"msg_contents": "I have a planner problem that looks like a bug, but I'm not familiar enough with how planner the works to say for sure.\n\nThis is my schema:\n\n create table comments (\n id serial primary key,\n conversation_id integer,\n created_at timestamp\n );\n create index comments_conversation_id_index on comments (conversation_id);\n create index comments_created_at_index on comments (created_at);\n\nThe table has 3.5M rows, and 650k unique values for \"conversation_id\", where the histogram goes up to 54000 rows for the most frequent ID, with a long tail. There are only 20 values with a frequency of 1000 or higher. The \"created_at\" column has 3.5M distinct values.\n\nNow, I have this query:\n\n select comments.id from comments where\n conversation_id = 3975979 order by created_at limit 13\n\n\nThis filters about 5000 rows and returns the oldest 13 rows. But the query is consistently planned wrong:\n\n Limit (cost=0.00..830.53 rows=13 width=12) (actual time=3174.862..3179.525 rows=13 loops=1) \n Buffers: shared hit=2400709 read=338923 written=21 \n -> Index Scan using comments_created_at_index on comments (cost=0.00..359938.52 rows=5634 width=12) (actual time=3174.860..3179.518 rows=13 loops=1)\n Filter: (conversation_id = 3975979) \n Rows Removed by Filter: 2817751 \n Buffers: shared hit=2400709 read=338923 written=21 \n Total runtime: 3179.553 ms \n\n\n\nIt takes anywhere between 3 seconds and several minutes to run, depending on how warm the disk cache is. This is the correct plan and index usage:\n\n Limit (cost=6214.34..6214.38 rows=13 width=12) (actual time=25.471..25.473 rows=13 loops=1) \n Buffers: shared hit=197 read=4510 \n -> Sort (cost=6214.34..6228.02 rows=5471 width=12) (actual time=25.469..25.470 rows=13 loops=1) \n Sort Key: created_at \n Sort Method: top-N heapsort Memory: 25kB \n Buffers: shared hit=197 read=4510 \n -> Index Scan using comments_conversation_id_index on comments (cost=0.00..6085.76 rows=5471 width=12) (actual time=1.163..23.955 rows=5834 loops=1)\n Index Cond: (conversation_id = 3975979) \n Buffers: shared hit=197 read=4510 \n Total runtime: 25.500 ms \n\n\n\n\nNow, the problem for Postgres is obviously to estimate how many rows have a given \"conversation_id\" value, but it does have that number. I'm at a loss how to explain why it thinks scanning a huge index that covers the entire table will ever beat scanning a small index that has 17% of the table's values.\n\nIt will consistently use the bad plan for higher-frequency values, and the good plan for lower-frequency values.\n\nIf I run ANALYZE repeatedly, the planner will sometimes, oddly enough, choose the correct plan. This behaviour actually seems related to effective_cache_size; if it's set small (128MB), the planner will sometimes favour the good plan, but if large (2GB) it will consistently use the bad plan.\n\nI have bumped the statistics target up to 10000, but it does not help. I have also tried setting n_distinct for the column manually, since Postgres guesses it's 285k instead of 650k, but that does not help.\n\nWhat is odd is that the bad plan is really never correct in any situation for this query. It will *always* be better to branch off the \"comments_conversation_id_index\" index.\n\nOur environment: 9.2.2, tweaked memory parameters (work_mem, sort_mem, shared_buffers, effective_cache_size), not touched planner/cost parameters. Problem also exists on 9.2.3.\n\n(Please CC me as I'm not subscribed to the list. Thanks.) 
\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Feb 2013 20:44:27 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad query plan with high-cardinality column"
}
] |
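A quick way to sanity-check the statistics the planner is working from in a case like the one above is to look at pg_stats directly. The sketch below is only illustrative: it assumes the comments table and conversation_id column from the post, and spells out the statistics-target and manual n_distinct overrides the poster says he tried.

    -- What the planner currently believes about comments.conversation_id;
    -- n_distinct and the most-common-value list drive the row estimate
    -- for "conversation_id = <constant>".
    SELECT n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'comments' AND attname = 'conversation_id';

    -- The per-column overrides mentioned in the post:
    ALTER TABLE comments ALTER COLUMN conversation_id SET STATISTICS 10000;
    ALTER TABLE comments ALTER COLUMN conversation_id SET (n_distinct = 650000);
    ANALYZE comments;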
[
{
"msg_contents": "I'm trying to create a view that uses a window function, but it seems that\nPostgres is apparently unable to optimize it. Here's a reproduction of my\nsituation with 9.2.2:\n\n---\n\ndrop table if exists values cascade; create table values ( fkey1 integer\nnot null, fkey2 integer not null, fkey3 integer not null, value float not\nnull, constraint values_pkey primary key (fkey1, fkey2, fkey3) ); -- Kind\nof hacky, but it roughly resembles my dataset. insert into values select\ndistinct on (fkey1, fkey2, fkey3) i / 12 + 1 as fkey1, i % 4 + 1 as fkey2,\nceil(random() * 10) as fkey3, random() * 2 - 1 as value from\ngenerate_series(0, 199999) i; create or replace view values_view as select\nfkey1, fkey3, (derived1 / max(derived1) over (partition by fkey1)) as\nderived1, (derived2 / sum(derived1) over (partition by fkey1)) as derived2\nfrom ( select fkey1, fkey3, cast(sum((case when (value > 0.0) then 4 else 1\nend)) as double precision) as derived1, sum((case when (value > 0.0) then\n(value * 4) else (value + 1) end)) as derived2 from values group by fkey1,\nfkey3 ) as t1;\n-- This query requires a sequential scan on values, though all the data it\nneeds could be found much more efficiently with an index scan. explain\nanalyze select * from values_view where fkey1 = 1263;\n\n---\n\nCan anyone suggest a way to rewrite this query, or maybe a workaround of\nsome kind?\n\nThanks, Chris\n\nI'm trying to create a view that uses a window function, but it seems that Postgres is apparently unable to optimize it. Here's a reproduction of my situation with 9.2.2:\n\n---\ndrop table if exists values cascade;\n\ncreate table values (\n fkey1 integer not null,\n fkey2 integer not null,\n fkey3 integer not null,\n value float not null,\n constraint values_pkey primary key (fkey1, fkey2, fkey3)\n);\n\n-- Kind of hacky, but it roughly resembles my dataset.\ninsert into values\nselect distinct on (fkey1, fkey2, fkey3)\n i / 12 + 1 as fkey1,\n i % 4 + 1 as fkey2,\n ceil(random() * 10) as fkey3,\n random() * 2 - 1 as value\nfrom generate_series(0, 199999) i;\n\ncreate or replace view values_view as \nselect fkey1, fkey3,\n (derived1 / max(derived1) over (partition by fkey1)) as derived1,\n (derived2 / sum(derived1) over (partition by fkey1)) as derived2\nfrom (\n select fkey1, fkey3,\n cast(sum((case when (value > 0.0) then 4 else 1 end)) as double precision) as derived1,\n sum((case when (value > 0.0) then (value * 4) else (value + 1) end)) as derived2\n from values\n group by fkey1, fkey3\n) as t1;\n-- This query requires a sequential scan on values, though all the data it needs could be found much more efficiently with an index scan.\nexplain analyze select * from values_view where fkey1 = 1263;\n---\nCan anyone suggest a way to rewrite this query, or maybe a workaround of some kind?\nThanks, Chris",
"msg_date": "Thu, 21 Feb 2013 15:37:10 -0800",
"msg_from": "Chris Hanks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using a window function in a view"
}
] |
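One way to rewrite the query in this thread is to push the fkey1 filter into the subquery by hand, so it is applied before the window functions run; because both window functions partition by fkey1, restricting to a single fkey1 value first yields the same rows for that value. This is a hand-inlined, one-off sketch against the schema posted above (the thread itself does not settle on a fix); the constant 1263 is simply carried over from the post.

    -- Filter before aggregating and windowing, so only the fkey1 = 1263
    -- partition is read; the primary key on (fkey1, fkey2, fkey3) can then
    -- be used to locate the relevant rows instead of a sequential scan.
    SELECT fkey1, fkey3,
           derived1 / max(derived1) OVER (PARTITION BY fkey1) AS derived1,
           derived2 / sum(derived1) OVER (PARTITION BY fkey1) AS derived2
    FROM (
        SELECT fkey1, fkey3,
               sum(CASE WHEN value > 0.0 THEN 4 ELSE 1 END)::double precision AS derived1,
               sum(CASE WHEN value > 0.0 THEN value * 4 ELSE value + 1 END) AS derived2
        FROM values
        WHERE fkey1 = 1263
        GROUP BY fkey1, fkey3
    ) AS t1;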
[
{
"msg_contents": "hello,\n\ni have a strange and reproducible bug with some select queries and 64bit \npostgresql builds (works fine on 32bit builds).\nThe postgres process will run with 100% cpu-load (no io-wait) and strace \nwill show endless lseek(..., SEEK_END) calls on one table for minutes.\nlseek(28, 0, SEEK_END) = 26697728\nlseek(28, 0, SEEK_END) = 26697728\nlseek(28, 0, SEEK_END) = 26697728\n...\nthe file-descriptor 28 points to the file for the webapps_base.Acquisition \ntable (see query/plan below).\n\nNow the details:\n\nThe Query:\nselect count(ac.ID) as col_0_0_ from \n\twebapps_base.Acquisition ac, \n\twebapps_base.SalesPartnerStructure struc\n\twhere \n\t\tstruc.fk_SalesPartner_child=ac.fk_SalesPartner_ID \n\t\tand struc.fk_SalesPartner_parent=200\n\t\tand (ac.CreationDate between '2012-02-01' and '2013-01-31') \n\t\tand ac.acquisitiondepot='STANDARD' \n\t\tand ('2013-01-31' between struc.ValidFrom \n\t\t\tand coalesce(struc.ValidTo, '2013-01-31'))\n\nThe Plan:\n\"Aggregate (cost=32617.11..32617.12 rows=1 width=8) (actual time=204.180..204.180 rows=1 loops=1)\"\n\" -> Merge Join (cost=32232.01..32598.26 rows=7543 width=8) (actual time=172.882..202.218 rows=21111 loops=1)\"\n\" Merge Cond: (ac.fk_salespartner_id = struc.fk_salespartner_child)\"\n\" -> Sort (cost=5582.60..5635.69 rows=21235 width=16) (actual time=28.920..31.121 rows=21204 loops=1)\"\n\" Sort Key: ac.fk_salespartner_id\"\n\" Sort Method: quicksort Memory: 1763kB\"\n\" -> Bitmap Heap Scan on acquisition ac (cost=395.26..4056.43 rows=21235 width=16) (actual time=3.064..15.868 rows=21223 loops=1)\"\n\" Recheck Cond: ((creationdate >= '2012-02-01'::date) AND (creationdate <= '2013-01-31'::date))\"\n\" Filter: ((acquisitiondepot)::text = 'STANDARD'::text)\"\n\" -> Bitmap Index Scan on index_acquistion_creationdate (cost=0.00..389.95 rows=21267 width=0) (actual time=2.890..2.890 rows=21514 loops=1)\"\n\" Index Cond: ((creationdate >= '2012-02-01'::date) AND (creationdate <= '2013-01-31'::date))\"\n\" -> Sort (cost=26648.60..26742.61 rows=37606 width=8) (actual time=143.952..152.808 rows=131713 loops=1)\"\n\" Sort Key: struc.fk_salespartner_child\"\n\" Sort Method: quicksort Memory: 8452kB\"\n\" -> Bitmap Heap Scan on salespartnerstructure struc (cost=3976.80..23790.79 rows=37606 width=8) (actual time=13.279..64.681 rows=114772 loops=1)\"\n\" Recheck Cond: (fk_salespartner_parent = 200)\"\n\" Filter: (('2013-01-31'::date >= validfrom) AND ('2013-01-31'::date <= COALESCE(validto, '2013-01-31'::date)))\"\n\" -> Bitmap Index Scan on index_parent_salespartner (cost=0.00..3967.39 rows=114514 width=0) (actual time=13.065..13.065 rows=116479 loops=1)\"\n\" Index Cond: (fk_salespartner_parent = 200)\"\n\"Total runtime: 205.397 ms\"\n\nas you can see the query runs fine. \nI can run this query from a bash-psql-while-loop/jdbc-cli-tool \nendless without any problems. \nso far so good.\n\nBut now i run the same query from:\n\nJBoss EAP 5.1.2 with connection pooling and xa-datasource/two-phase-commits \n(transactions on multiple datasources needed)\n*and* *<prepared-statement-cache-size>1000</prepared-statement-cache-size>*\n\ni can run the query four times with good performance and after that postgresql \nstarts with the strange lseek() behavior. 
\nThe query needs more then a minute to complete and during execution the \npostgres process runs at 100% cpu load with lseek calls (straced).\nIf i flush the connection pool (close all open connections from the jboss \njmx-console) it works again for four calls.\nThese problem applies only to 64bit builds. If i run a 32bit postgresql \nserver it works fine.\n\nWe have tested the following environments:\n\n- Debian Squeeze 64bit with Postgresql 9.1.[5,6,7] -> Bad behavior\n- Debian Wheezy 64bit with Postgresql 9.1.8 64bit -> Bad behavior\n- Ubuntu 12.04 LTS 64bit with Postgresql 9.1.8 64bit -> Bad behavior\n- Windows 7 x64 with Postgresql 9.1.8 64bit -> Bad behavior\n- Debian Wheezy 64bit with EnterpriseDB 9.2 64bit -> Bad behavior\n\n- Debian Wheezy 64bit with Postgresql 9.1.8 32bit -> Good behavior\n- Debian Wheezy 32bit with Postgresql 9.1.8 32bit -> Good behavior\n\nas you can see all 64bit builds of postgresql are affected (independent from os-arch).\n\nIf i disable the prepared-statement-cache-size (remove it from -ds.xml) \nit works on 64bit build too.\n\nregards,\nmsc\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Feb 2013 09:25:23 +0100",
"msg_from": "Markus Schulz <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG: endless lseek(.., SEEK_END) from select queries on x64 builds"
},
{
"msg_contents": "On 22.02.2013 10:25, Markus Schulz wrote:\n> i can run the query four times with good performance and after that postgresql\n> starts with the strange lseek() behavior.\n\nBy default, the JDBC driver re-plans the prepared statement for the \nfirst 4 invocations of the query. On the fifth invocation, it switches \nto using a generic plan, which will be reused on subsequent invocations. \nSee http://jdbc.postgresql.org/documentation/head/server-prepare.html. \nThe generic plan seems to perform much worse in this case. You can \ndisable that mechanism and force re-planning the query every time by \nsetting the \"prepareThreshold=0\" parameter on the data source.\n\nYou could check what the generic plan looks like by taking the query \nused in the java program, with the parameter markers, and running \nEXPLAIN on that.\n\nPostgreSQL version 9.2 might work better in this case. It has some \nsmarts in the server to generate parameter-specific plans even when \nprepared statements are used, if the planner thinks a specific plan will \nbe faster.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Feb 2013 15:35:25 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG: endless lseek(..,\n SEEK_END) from select queries on x64 builds"
},
{
"msg_contents": "Markus,\n\n* Markus Schulz ([email protected]) wrote:\n> as you can see the query runs fine. \n> I can run this query from a bash-psql-while-loop/jdbc-cli-tool \n> endless without any problems. \n> so far so good.\n[...]\n> JBoss EAP 5.1.2 with connection pooling and xa-datasource/two-phase-commits \n> (transactions on multiple datasources needed)\n> *and* *<prepared-statement-cache-size>1000</prepared-statement-cache-size>*\n> \n> i can run the query four times with good performance and after that postgresql \n> starts with the strange lseek() behavior. \n\nIt sounds like your bash script and JBoss are doing something different.\nWould it be possible for you to turn on log_statements = 'all' in PG,\nsee what's different, and then update the bash/psql script to do exactly\nwhat JBoss does, and see if you can reproduce it that way?\n\nIt certainly looks like a PG bug, but it'd be a lot easier to debug with\na simple, well-defined test case which shows the failure.\n\n\tThanks!\n\n\t\tStephen",
"msg_date": "Fri, 22 Feb 2013 08:37:23 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG: endless lseek(..,\n SEEK_END) from select queries on x64 builds"
},
{
"msg_contents": "Am Freitag, 22. Februar 2013, 14:35:25 schrieb Heikki Linnakangas:\n> On 22.02.2013 10:25, Markus Schulz wrote:\n> > i can run the query four times with good performance and after that\n> > postgresql starts with the strange lseek() behavior.\n> \n> By default, the JDBC driver re-plans the prepared statement for the\n> first 4 invocations of the query. On the fifth invocation, it switches\n> to using a generic plan, which will be reused on subsequent invocations.\n\nthat sounds really interesting and i would try to change my java-jdbc-test-cli \nprogram according to that, but ...\n\n> See http://jdbc.postgresql.org/documentation/head/server-prepare.html.\n> The generic plan seems to perform much worse in this case. You can\n> disable that mechanism and force re-planning the query every time by\n> setting the \"prepareThreshold=0\" parameter on the data source.\n\nit wouldn't explain why the same jboss runs fine with a 32bit postgresql \nserver (i switched only the datasource to another server with exactly the same \ndatabase).\n\n> You could check what the generic plan looks like by taking the query\n> used in the java program, with the parameter markers, and running\n> EXPLAIN on that.\n\nhow can i do this?\nI've tried the following in my ejb-test-function to:\n\nString query = \"...\"\nentitymanager.createNativeQuery(query)...;\nentitymanager.createNativeQuery(\"EXPLAIN ANALYZE \" + query)...;\n\nbut the second createNativeQuery call runs fast every time and will show the \nsame plan and the first hangs after the fourth call to this function.\n\n> PostgreSQL version 9.2 might work better in this case. It has some\n> smarts in the server to generate parameter-specific plans even when\n> prepared statements are used, if the planner thinks a specific plan will\n> be faster.\n\nthis wouldn't help:\n> - Debian Wheezy 64bit with EnterpriseDB 9.2 64bit -> Bad behavior\n\nwe tried postgresql 9.2 too\n\n> - Heikki\n\nregards,\nmsc\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Feb 2013 19:10:25 +0100",
"msg_from": "Markus Schulz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BUG: endless lseek(..,\n SEEK_END) from select queries on x64 builds"
},
{
"msg_contents": "On 22.02.2013 20:10, Markus Schulz wrote:\n> Am Freitag, 22. Februar 2013, 14:35:25 schrieb Heikki Linnakangas:\n>> You could check what the generic plan looks like by taking the query\n>> used in the java program, with the parameter markers, and running\n>> EXPLAIN on that.\n>\n> how can i do this?\n> I've tried the following in my ejb-test-function to:\n>\n> String query = \"...\"\n> entitymanager.createNativeQuery(query)...;\n> entitymanager.createNativeQuery(\"EXPLAIN ANALYZE \" + query)...;\n>\n> but the second createNativeQuery call runs fast every time and will show the\n> same plan and the first hangs after the fourth call to this function.\n\nYou can take the query, replace the ? parameter markers with $1, $2, and \nso forth, and explain it with psql like this:\n\nprepare foo (text) as select * from mytable where id = $1;\nexplain analyze execute foo ('foo');\n\nOn 9.2, though, this will explain the specific plan for those \nparameters, so it might not be any different from what you already \nEXPLAINed.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 23 Feb 2013 17:54:26 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG: endless lseek(..,\n SEEK_END) from select queries on x64 builds"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> You can take the query, replace the ? parameter markers with $1, $2, and \n> so forth, and explain it with psql like this:\n\n> prepare foo (text) as select * from mytable where id = $1;\n> explain analyze execute foo ('foo');\n\n> On 9.2, though, this will explain the specific plan for those \n> parameters, so it might not be any different from what you already \n> EXPLAINed.\n\nYou can tell whether you're getting a generic or custom plan by noting\nwhether the explain output contains $n symbols or the values you put in.\nIn 9.2, the first five attempts will always produce custom plans, but\non the sixth and subsequent attempts you will get a generic plan, if\nthe plancache logic decides that it's not getting any benefit out of\ncustom plans. Here's a trivial example:\n\nregression=# prepare foo(int) as select * from tenk1 where unique1 = $1;\nPREPARE\nregression=# explain execute foo(42);\n QUERY PLAN \n-----------------------------------------------------------------------------\n Index Scan using tenk1_unique1 on tenk1 (cost=0.29..8.30 rows=1 width=244)\n Index Cond: (unique1 = 42)\n(2 rows)\n\nregression=# explain execute foo(42);\n QUERY PLAN \n-----------------------------------------------------------------------------\n Index Scan using tenk1_unique1 on tenk1 (cost=0.29..8.30 rows=1 width=244)\n Index Cond: (unique1 = 42)\n(2 rows)\n\nregression=# explain execute foo(42);\n QUERY PLAN \n-----------------------------------------------------------------------------\n Index Scan using tenk1_unique1 on tenk1 (cost=0.29..8.30 rows=1 width=244)\n Index Cond: (unique1 = 42)\n(2 rows)\n\nregression=# explain execute foo(42);\n QUERY PLAN \n-----------------------------------------------------------------------------\n Index Scan using tenk1_unique1 on tenk1 (cost=0.29..8.30 rows=1 width=244)\n Index Cond: (unique1 = 42)\n(2 rows)\n\nregression=# explain execute foo(42);\n QUERY PLAN \n-----------------------------------------------------------------------------\n Index Scan using tenk1_unique1 on tenk1 (cost=0.29..8.30 rows=1 width=244)\n Index Cond: (unique1 = 42)\n(2 rows)\n\nregression=# explain execute foo(42);\n QUERY PLAN \n-----------------------------------------------------------------------------\n Index Scan using tenk1_unique1 on tenk1 (cost=0.29..8.30 rows=1 width=244)\n Index Cond: (unique1 = $1)\n(2 rows)\n\nIt's switched to a generic plan after observing that the custom plans\nweren't any cheaper. Once that happens, subsequent attempts will use\nthe generic plan. (Of course, in a scenario where the custom plans do\nprovide a benefit, it'll keep using those.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 23 Feb 2013 11:14:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG: endless lseek(..,\n SEEK_END) from select queries on x64 builds"
}
] |
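Heikki's and Tom's advice can be applied to the query from this thread roughly as below. Treat it as a sketch: which literals are actually bound as JDBC parameters is an assumption (here the parent id and the three dates), and the five-custom-plans-then-generic behaviour Tom describes is the 9.2 one; on the 9.1 servers in the report, the named prepared statement the driver switches to is planned generically from the start, which is exactly the plan worth inspecting.

    -- Replace the JDBC '?' markers with $1..$4 (assumed parameter positions)
    -- and look at the plan the reusable, generic statement gets.
    PREPARE acq_count (integer, date, date, date) AS
        SELECT count(ac.ID)
        FROM webapps_base.Acquisition ac,
             webapps_base.SalesPartnerStructure struc
        WHERE struc.fk_SalesPartner_child = ac.fk_SalesPartner_ID
          AND struc.fk_SalesPartner_parent = $1
          AND ac.CreationDate BETWEEN $2 AND $3
          AND ac.acquisitiondepot = 'STANDARD'
          AND $4 BETWEEN struc.ValidFrom AND coalesce(struc.ValidTo, $4);

    -- On 9.2 the first five executions get custom plans; once EXPLAIN shows
    -- $n symbols instead of the literal values, you are looking at the
    -- generic plan that the pooled JBoss connections end up reusing.
    EXPLAIN ANALYZE EXECUTE acq_count(200, '2012-02-01', '2013-01-31', '2013-01-31');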
[
{
"msg_contents": "I got a problem with the performance of a PL/PGsql stored procedure \noutputting an xml.\n\n/Server version:/ PostgreSQL 8.3.6 on i686-pc-linux-gnu, compiled by GCC \ngcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46)\n/CPU/: Intel(R) Core(TM) i3 CPU 540 @ 3.07GHz\n/RAM installed:/ 4GB\n/Hard Disk:/ Seagate 500Gb SATA 2\n\nThis is a simplified content of the function showing the xmlconcat \nbehaviour.\n\nCREATE OR REPLACE FUNCTION test_function (v_limit int)\n RETURNS xml AS\n$BODY$\nDECLARE\n v_xml xml;\nBEGIN\n\n FOR i IN 1..v_limit LOOP\n v_xml := xmlconcat(v_xml, xmlelement(name content, 'aaaaaaa'));\n END LOOP;\n\n RETURN v_xml ;\nEND\n$BODY$\n LANGUAGE 'plpgsql' SECURITY DEFINER ;\n\n\nAs long as the v_limit parameter grows (and then the size of the output \nxml, the time needed increase exponentially.\nLook at this examples:\n\npang=# explain analyze select test_function(1000);\n QUERY PLAN\n--------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual time=65.430..65.431 \nrows=1 loops=1)\n Total runtime: 65.457 ms\n(2 rows)\n\npang=# explain analyze select test_function(5000);\n QUERY PLAN\n----------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual time=473.318..473.318 \nrows=1 loops=1)\n Total runtime: 473.340 ms\n(2 rows)\n\npang=# explain analyze select test_function(15000);\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual \ntime=4044.903..4044.904 rows=1 loops=1)\n Total runtime: 4044.928 ms\n(2 rows)\n\npang=# explain analyze select test_function(50000);\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual \ntime=94994.337..94994.369 rows=1 loops=1)\n Total runtime: 94994.396 ms\n(2 rows)\n\nI already tried to update to 8.3.23 service version but i didn't see any \nimprovement.\n\nDo you have any suggestion about how to increase the performance of \nxmlconcat?\n\nMy need is to use stored procedures that calls xmlconcat more than 50000 \ntimes, but it is unacceptable 94 seconds to complete the job.\n\nThanks in advance\n\n\n\n\n\n\n\nI got a problem with\n the performance of a PL/PGsql stored procedure outputting an xml.\n\nServer version:\n PostgreSQL 8.3.6 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)\n 4.1.2 20080704 (Red Hat 4.1.2-46)\nCPU: Intel(R) Core(TM) i3 CPU 540 @ 3.07GHz\nRAM installed: 4GB\nHard Disk: Seagate 500Gb SATA 2\n\nThis is a simplified\n content of the function showing the xmlconcat behaviour.\n\nCREATE OR REPLACE\n FUNCTION test_function (v_limit int)\n RETURNS xml AS\n $BODY$\n DECLARE\n v_xml xml;\n BEGIN\n\n FOR i IN 1..v_limit LOOP\n v_xml := xmlconcat(v_xml, xmlelement(name content,\n 'aaaaaaa'));\n END LOOP;\n\n RETURN v_xml ;\n END\n $BODY$\n LANGUAGE 'plpgsql' SECURITY DEFINER ;\n\n\n As long as the v_limit parameter grows (and then the size of the\n output xml, the time needed increase exponentially.\n Look at this examples:\n\npang=# explain\n analyze select test_function(1000);\n QUERY PLAN\n--------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual\n time=65.430..65.431 rows=1 loops=1)\n Total runtime: 65.457 ms\n (2 rows)\n\npang=# explain\n analyze select test_function(5000);\n QUERY 
PLAN\n----------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual\n time=473.318..473.318 rows=1 loops=1)\n Total runtime: 473.340 ms\n (2 rows)\npang=# explain\n analyze select test_function(15000);\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual\n time=4044.903..4044.904 rows=1 loops=1)\n Total runtime: 4044.928 ms\n (2 rows)\n\npang=# explain\n analyze select test_function(50000);\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual\n time=94994.337..94994.369 rows=1 loops=1)\n Total runtime: 94994.396 ms\n (2 rows)\n\n\nI already tried to\n update to 8.3.23 service version but i didn't see any\n improvement. \nDo you have any\n suggestion about how to increase the performance of xmlconcat?\n\nMy need is to use\n stored procedures that calls xmlconcat more than 50000 times, but\n it is unacceptable 94 seconds to complete the job.\n\nThanks in advance",
"msg_date": "Fri, 22 Feb 2013 10:21:55 +0100",
"msg_from": "Davide Berra <[email protected]>",
"msg_from_op": true,
"msg_subject": "xmlconcat performance"
},
{
"msg_contents": "On Fri, Feb 22, 2013 at 3:21 AM, Davide Berra <[email protected]> wrote:\n> I got a problem with the performance of a PL/PGsql stored procedure\n> outputting an xml.\n>\n> Server version: PostgreSQL 8.3.6 on i686-pc-linux-gnu, compiled by GCC gcc\n> (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46)\n> CPU: Intel(R) Core(TM) i3 CPU 540 @ 3.07GHz\n> RAM installed: 4GB\n> Hard Disk: Seagate 500Gb SATA 2\n>\n> This is a simplified content of the function showing the xmlconcat\n> behaviour.\n>\n> CREATE OR REPLACE FUNCTION test_function (v_limit int)\n> RETURNS xml AS\n> $BODY$\n> DECLARE\n> v_xml xml;\n> BEGIN\n>\n> FOR i IN 1..v_limit LOOP\n> v_xml := xmlconcat(v_xml, xmlelement(name content, 'aaaaaaa'));\n> END LOOP;\n>\n> RETURN v_xml ;\n> END\n> $BODY$\n> LANGUAGE 'plpgsql' SECURITY DEFINER ;\n>\n>\n> As long as the v_limit parameter grows (and then the size of the output xml,\n> the time needed increase exponentially.\n> Look at this examples:\n>\n> pang=# explain analyze select test_function(1000);\n> QUERY PLAN\n> --------------------------------------------------------------------------------------\n> Result (cost=0.00..0.26 rows=1 width=0) (actual time=65.430..65.431 rows=1\n> loops=1)\n> Total runtime: 65.457 ms\n> (2 rows)\n>\n> pang=# explain analyze select test_function(5000);\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------\n> Result (cost=0.00..0.26 rows=1 width=0) (actual time=473.318..473.318\n> rows=1 loops=1)\n> Total runtime: 473.340 ms\n> (2 rows)\n>\n> pang=# explain analyze select test_function(15000);\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------\n> Result (cost=0.00..0.26 rows=1 width=0) (actual time=4044.903..4044.904\n> rows=1 loops=1)\n> Total runtime: 4044.928 ms\n> (2 rows)\n>\n> pang=# explain analyze select test_function(50000);\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------\n> Result (cost=0.00..0.26 rows=1 width=0) (actual time=94994.337..94994.369\n> rows=1 loops=1)\n> Total runtime: 94994.396 ms\n> (2 rows)\n>\n> I already tried to update to 8.3.23 service version but i didn't see any\n> improvement.\n>\n> Do you have any suggestion about how to increase the performance of\n> xmlconcat?\n>\n> My need is to use stored procedures that calls xmlconcat more than 50000\n> times, but it is unacceptable 94 seconds to complete the job.\n>\n> Thanks in advance\n\ntypically for high performance string manipulation you have to do\nthings on more purely textual level and manipulate through arrays to\nget really good performance. iterative string concatenation is\ntypically wrong approach -- you have to think in set terms.\n\nalso your database version is obsolete -- time to start thinking about upgrade.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Feb 2013 11:48:05 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: xmlconcat performance"
},
{
"msg_contents": "On Fri, Mar 1, 2013 at 2:18 AM, Davide Berra <[email protected]> wrote:\n> Il 28/02/2013 18:48, Merlin Moncure ha scritto:\n>\n>> On Fri, Feb 22, 2013 at 3:21 AM, Davide Berra <[email protected]>\n>> wrote:\n>>>\n>>> I got a problem with the performance of a PL/PGsql stored procedure\n>>> outputting an xml.\n>>>\n>>> Server version: PostgreSQL 8.3.6 on i686-pc-linux-gnu, compiled by GCC\n>>> gcc\n>>> (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46)\n>>> CPU: Intel(R) Core(TM) i3 CPU 540 @ 3.07GHz\n>>> RAM installed: 4GB\n>>> Hard Disk: Seagate 500Gb SATA 2\n>>>\n>>> This is a simplified content of the function showing the xmlconcat\n>>> behaviour.\n>>>\n>>> CREATE OR REPLACE FUNCTION test_function (v_limit int)\n>>> RETURNS xml AS\n>>> $BODY$\n>>> DECLARE\n>>> v_xml xml;\n>>> BEGIN\n>>>\n>>> FOR i IN 1..v_limit LOOP\n>>> v_xml := xmlconcat(v_xml, xmlelement(name content, 'aaaaaaa'));\n>>> END LOOP;\n>>>\n>>> RETURN v_xml ;\n>>> END\n>>> $BODY$\n>>> LANGUAGE 'plpgsql' SECURITY DEFINER ;\n>>>\n>>>\n>>> As long as the v_limit parameter grows (and then the size of the output\n>>> xml,\n>>> the time needed increase exponentially.\n>>> Look at this examples:\n>>>\n>>> pang=# explain analyze select test_function(1000);\n>>> QUERY PLAN\n>>>\n>>> --------------------------------------------------------------------------------------\n>>> Result (cost=0.00..0.26 rows=1 width=0) (actual time=65.430..65.431\n>>> rows=1\n>>> loops=1)\n>>> Total runtime: 65.457 ms\n>>> (2 rows)\n>>>\n>>> pang=# explain analyze select test_function(5000);\n>>> QUERY PLAN\n>>>\n>>> ----------------------------------------------------------------------------------------\n>>> Result (cost=0.00..0.26 rows=1 width=0) (actual time=473.318..473.318\n>>> rows=1 loops=1)\n>>> Total runtime: 473.340 ms\n>>> (2 rows)\n>>>\n>>> pang=# explain analyze select test_function(15000);\n>>> QUERY PLAN\n>>>\n>>> ------------------------------------------------------------------------------------------\n>>> Result (cost=0.00..0.26 rows=1 width=0) (actual\n>>> time=4044.903..4044.904\n>>> rows=1 loops=1)\n>>> Total runtime: 4044.928 ms\n>>> (2 rows)\n>>>\n>>> pang=# explain analyze select test_function(50000);\n>>> QUERY PLAN\n>>>\n>>> --------------------------------------------------------------------------------------------\n>>> Result (cost=0.00..0.26 rows=1 width=0) (actual\n>>> time=94994.337..94994.369\n>>> rows=1 loops=1)\n>>> Total runtime: 94994.396 ms\n>>> (2 rows)\n>>>\n>>> I already tried to update to 8.3.23 service version but i didn't see any\n>>> improvement.\n>>>\n>>> Do you have any suggestion about how to increase the performance of\n>>> xmlconcat?\n>>>\n>>> My need is to use stored procedures that calls xmlconcat more than 50000\n>>> times, but it is unacceptable 94 seconds to complete the job.\n>>>\n>>> Thanks in advance\n>>\n>> typically for high performance string manipulation you have to do\n>> things on more purely textual level and manipulate through arrays to\n>> get really good performance. iterative string concatenation is\n>> typically wrong approach -- you have to think in set terms.\n>>\n>> also your database version is obsolete -- time to start thinking about\n>> upgrade.\n>>\n>> merlin\n>>\n> Thank you for the reply Merlin but i don't fully get what you mean. (sorry,\n> i'm not a PostgreSQL expert)\n> How would you change the above example function in order to improve\n> performance?\n> What do you mean with \"manipulate through arrays\"?\n\nwell arrays, or simple aggregation. 
for example:\nselect string_agg(v, '') from (select 'aaaaaaa'::text as v from\ngenerate_series(1,50000)) q;\n\nruns in ~ 30 ms.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 1 Mar 2013 08:09:25 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: xmlconcat performance"
}
] |
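Merlin's point about set-based string building applies to XML directly: build all the elements in one query and aggregate them once, instead of reassigning a growing xml value 50,000 times in a PL/pgSQL loop. The sketch below uses xmlagg, which should already be available on the 8.3 series in question; it is an untested illustration rather than something from the thread, so the actual speedup on that 8.3.x instance should be measured.

    -- Set-based form of the loop in test_function: one aggregate call
    -- producing the same sequence of <content> elements.
    SELECT xmlagg(xmlelement(name content, 'aaaaaaa'))
    FROM generate_series(1, 50000);

    -- Wrapped as a drop-in replacement with the same signature
    -- (hypothetical name; $1 is the limit parameter):
    CREATE OR REPLACE FUNCTION test_function_agg(v_limit int) RETURNS xml AS
    $BODY$
        SELECT xmlagg(xmlelement(name content, 'aaaaaaa'))
        FROM generate_series(1, $1);
    $BODY$ LANGUAGE sql;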
[
{
"msg_contents": "The following query produces a Recheck Cond and a costly Bitmap Heap Scan\neven though I have a composite index that covers both columns being filtered\nand selected. I believe this is because the initial bitmap scan produces\n2912 rows, which is too many for the available bitmap space. I've tried\nrewriting the command as \"Select ... group by\" but it still uses the BHS. Is\nthere a way to rewrite this command that would improve performance by\navoiding the costly Bitmap Heap Scan?\n\n\nSELECT distinct store_id, book_id FROM \"sales_points\" WHERE\n\"sales_points\".\"store_id\" IN (1, 2, 3, 4, 5, 6, 199, 201, 202) AND\n\"sales_points\".\"book_id\" IN (421, 422, 419, 420)\n\nHere is the explain/analyze output:\n\n\n\"HashAggregate (cost=5938.72..5939.01 rows=97 width=8) (actual\ntime=10.837..10.854 rows=32 loops=1)\"\n\" -> Bitmap Heap Scan on sales_points (cost=47.03..5936.53 rows=2191\nwidth=8) (actual time=0.547..5.296 rows=4233 loops=1)\"\n\" Recheck Cond: (book_id = ANY ('{421,422,419,420}'::integer[]))\"\n\" Filter: (store_id = ANY ('{1,2,3,4,5,6,199,201,202}'::integer[]))\"\n\" -> Bitmap Index Scan on index_sales_points_on_book_id \n(cost=0.00..46.92 rows=4430 width=0) (actual time=0.469..0.469 rows=4233\nloops=1)\"\n\" Index Cond: (book_id = ANY ('{421,422,419,420}'::integer[]))\"\n\"Total runtime: 10.935 ms\"\n\n\nActual runtime is more like 15ms when tested against a development database\n(which gave est. total runtime of 6ms). Under load in production, the\ncommand takes 10,158 ms. Tuning Postgre is not an option, as the instance\nis provided by Heroku and as far as I know cannot be tuned by me.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Avoiding-Recheck-Cond-when-using-Select-Distinct-tp5746290.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Feb 2013 08:36:16 -0800 (PST)",
"msg_from": "jackrg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Avoiding Recheck Cond when using Select Distinct"
},
{
"msg_contents": "On Fri, Feb 22, 2013 at 8:36 AM, jackrg <[email protected]>wrote:\n\n> The following query produces a Recheck Cond and a costly Bitmap Heap Scan\n> even though I have a composite index that covers both columns being\n> filtered\n> and selected.\n\n\nCan you show us the definition of that index?\n\n\n> I believe this is because the initial bitmap scan produces\n> 2912 rows, which is too many for the available bitmap space. I've tried\n> rewriting the command as \"Select ... group by\" but it still uses the BHS.\n> Is\n> there a way to rewrite this command that would improve performance by\n> avoiding the costly Bitmap Heap Scan?\n>\n\n\nHow do you know that the bitmap heap scan is costly, since you haven't\ngotten it to use an alternative to compare it to? As a temporary\nexperimental measure, you could at least set enable_bitmapscan TO off, to\nsee what happens.\n\n\n>\n>\n> SELECT distinct store_id, book_id FROM \"sales_points\" WHERE\n> \"sales_points\".\"store_id\" IN (1, 2, 3, 4, 5, 6, 199, 201, 202) AND\n> \"sales_points\".\"book_id\" IN (421, 422, 419, 420)\n>\n> Here is the explain/analyze output:\n>\n>\n> \"HashAggregate (cost=5938.72..5939.01 rows=97 width=8) (actual\n> time=10.837..10.854 rows=32 loops=1)\"\n> \" -> Bitmap Heap Scan on sales_points (cost=47.03..5936.53 rows=2191\n> width=8) (actual time=0.547..5.296 rows=4233 loops=1)\"\n> \" Recheck Cond: (book_id = ANY ('{421,422,419,420}'::integer[]))\"\n> \" Filter: (store_id = ANY ('{1,2,3,4,5,6,199,201,202}'::integer[]))\"\n> \" -> Bitmap Index Scan on index_sales_points_on_book_id\n> (cost=0.00..46.92 rows=4430 width=0) (actual time=0.469..0.469 rows=4233\n> loops=1)\"\n> \" Index Cond: (book_id = ANY\n> ('{421,422,419,420}'::integer[]))\"\n> \"Total runtime: 10.935 ms\"\n>\n>\n> Actual runtime is more like 15ms when tested against a development database\n> (which gave est. total runtime of 6ms).\n\n\n\nI don't understand the parenthetical. In the explain plan you show, where\nis 6ms coming from?\n\n\n\n> Under load in production, the\n> command takes 10,158 ms.\n\n\nDo you mean 10.158 ms rather than 10,158 ms? If the production environment\nreally takes 1000 times longer than the environment in which you gathered\nthe EXPLAIN, then I would seriously doubt how useful that EXPLAIN could\npossibly be.\n\nCheers,\n\nJeff\n\nOn Fri, Feb 22, 2013 at 8:36 AM, jackrg <[email protected]> wrote:\nThe following query produces a Recheck Cond and a costly Bitmap Heap Scan\neven though I have a composite index that covers both columns being filtered\nand selected. Can you show us the definition of that index? I believe this is because the initial bitmap scan produces\n\n2912 rows, which is too many for the available bitmap space. I've tried\nrewriting the command as \"Select ... group by\" but it still uses the BHS. Is\nthere a way to rewrite this command that would improve performance by\navoiding the costly Bitmap Heap Scan?How do you know that the bitmap heap scan is costly, since you haven't gotten it to use an alternative to compare it to? As a temporary experimental measure, you could at least set enable_bitmapscan TO off, to see what happens. 
\n \n\n\nSELECT distinct store_id, book_id FROM \"sales_points\" WHERE\n\"sales_points\".\"store_id\" IN (1, 2, 3, 4, 5, 6, 199, 201, 202) AND\n\"sales_points\".\"book_id\" IN (421, 422, 419, 420)\n\nHere is the explain/analyze output:\n\n\n\"HashAggregate (cost=5938.72..5939.01 rows=97 width=8) (actual\ntime=10.837..10.854 rows=32 loops=1)\"\n\" -> Bitmap Heap Scan on sales_points (cost=47.03..5936.53 rows=2191\nwidth=8) (actual time=0.547..5.296 rows=4233 loops=1)\"\n\" Recheck Cond: (book_id = ANY ('{421,422,419,420}'::integer[]))\"\n\" Filter: (store_id = ANY ('{1,2,3,4,5,6,199,201,202}'::integer[]))\"\n\" -> Bitmap Index Scan on index_sales_points_on_book_id\n(cost=0.00..46.92 rows=4430 width=0) (actual time=0.469..0.469 rows=4233\nloops=1)\"\n\" Index Cond: (book_id = ANY ('{421,422,419,420}'::integer[]))\"\n\"Total runtime: 10.935 ms\"\n\n\nActual runtime is more like 15ms when tested against a development database\n(which gave est. total runtime of 6ms).I don't understand the parenthetical. In the explain plan you show, where is 6ms coming from? \n Under load in production, the\ncommand takes 10,158 ms. Do you mean 10.158 ms rather than 10,158 ms? If the production environment really takes 1000 times longer than the environment in which you gathered the EXPLAIN, then I would seriously doubt how useful that EXPLAIN could possibly be.\n Cheers,Jeff",
"msg_date": "Fri, 22 Feb 2013 09:19:27 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding Recheck Cond when using Select Distinct"
},
{
"msg_contents": "2013/2/22 jackrg <[email protected]>\n\n> Tuning Postgre is not an option, as the instance\n> is provided by Heroku and as far as I know cannot be tuned by me.\n>\n> Most tuning parameters can be set at per-query basis, so you can issue\nalter database set param=value\nto have same effect as if it was set through postgresql.conf.\n\n2013/2/22 jackrg <[email protected]>\nTuning Postgre is not an option, as the instance\nis provided by Heroku and as far as I know cannot be tuned by me.Most tuning parameters can be set at per-query basis, so you can issue alter database set param=value\nto have same effect as if it was set through postgresql.conf.",
"msg_date": "Fri, 22 Feb 2013 19:59:40 +0200",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding Recheck Cond when using Select Distinct"
},
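For reference, the two levels Vitalii is alluding to look roughly like this. The database name is a placeholder, the parameter shown is an ordinary user-settable one, and persisting a planner toggle such as enable_bitmapscan is included only to illustrate the ALTER DATABASE syntax, not as a recommendation.

    -- Per-session experiment (what Jeff suggested earlier in the thread):
    SET enable_bitmapscan = off;
    -- ... re-run EXPLAIN ANALYZE on the query and compare ...
    RESET enable_bitmapscan;

    -- Persisting a setting for every new connection to one database
    -- (requires owning the database; "mydb" is a placeholder name):
    ALTER DATABASE mydb SET enable_bitmapscan = off;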
{
"msg_contents": "On Fri, Feb 22, 2013 at 9:59 AM, Vitalii Tymchyshyn <[email protected]> wrote:\n>> Tuning Postgre is not an option, as the instance\n>> is provided by Heroku and as far as I know cannot be tuned by me.\n>>\n> Most tuning parameters can be set at per-query basis, so you can issue\n> alter database set param=value\n> to have same effect as if it was set through postgresql.conf.\n\nJack,\n\nJeff brought up some great points and What Vitalii suggested should\nlet you tweak most knobs, but if you're running into limitations of\nthe platform or you find default settings which seem outright\nincorrect, please file a support ticket and we'll be happy to work\nwith you.\n\nThanks,\nMaciek\nHeroku Postgres\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Feb 2013 11:06:08 -0800",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding Recheck Cond when using Select Distinct"
},
{
"msg_contents": "On Fri, Feb 22, 2013 at 9:45 AM, Jack Royal-Gordon <\[email protected]> wrote:\n\n>\n> On Feb 22, 2013, at 9:19 AM, Jeff Janes <[email protected]> wrote:\n>\n> On Fri, Feb 22, 2013 at 8:36 AM, jackrg <[email protected]>wrote:\n>\n>> The following query produces a Recheck Cond and a costly Bitmap Heap Scan\n>> even though I have a composite index that covers both columns being\n>> filtered\n>> and selected.\n>\n>\n> Can you show us the definition of that index?\n>\n>\n> CREATE INDEX test ON sales_points USING btree (book_id, store_id);\n>\n\nWhen I made up some random data that I thought was plausible, the planner\ndid choose to use this use this index. You might have some skew in your\ndata that we need to know more about in order to reproduce this (But that\nis probably not worthwhile, see below)\n\n\n>\n>\n>\n>> I believe this is because the initial bitmap scan produces\n>> 2912 rows, which is too many for the available bitmap space. I've tried\n>> rewriting the command as \"Select ... group by\" but it still uses the BHS.\n>> Is\n>> there a way to rewrite this command that would improve performance by\n>> avoiding the costly Bitmap Heap Scan?\n>>\n>\n>\n> How do you know that the bitmap heap scan is costly, since you haven't\n> gotten it to use an alternative to compare it to? As a temporary\n> experimental measure, you could at least set enable_bitmapscan TO off, to\n> see what happens.\n>\n>\n> Here's the \"Explan Analze\" output after setting enable_bitmapscan off:\n> \"HashAggregate (cost=8202.02..8203.33 rows=131 width=8) (actual\n> time=4.275..4.280 rows=32 loops=1)\"\n> \" -> Index Only Scan using test on sales_points (cost=0.01..8187.46\n> rows=2912 width=8) (actual time=0.083..3.268 rows=4233 loops=1)\"\n> \" Index Cond: ((book_id = ANY ('{421,422,419,420}'::integer[])) AND\n> (store_id = ANY ('{1,2,3,4,5,6,199,201,202}'::integer[])))\"\n> \" Heap Fetches: 4233\"\n> \"Total runtime: 4.331 ms\"\n>\n> While the total runtime reported is actually less, the costs reported are\n> 50% higher.\n>\n\n\nI suspect this is because the planner assumes that IO is rather expensive,\nand the bitmap plan saves on IO slightly while taking more CPU. But since\nall your data is in memory in this case, the IO savings are not real, while\nthe extra CPU time is real. However, this is probably not relevant.\nShaving 5ms off of a 10ms execution time looks impressive, but shaving 5ms\noff of your problematic case of 10s isn't going to do much for you.\n\nIt could be that the time savings extrapolated to the problem case will be\nmultiplicative rather than additive. But my gut is telling me it won't be\nmultiplicative. In fact it will probably go the other way, if the bitmap\ntruly is more IO efficient, and your problematic case is due to poor IO\nperformance, then getting it off the bitmap scan could easily make things\nworse.\n\n\n Under load in production, the\n>> command takes 10,158 ms.\n>\n>\n> Do you mean 10.158 ms rather than 10,158 ms? If the production\n> environment really takes 1000 times longer than the environment in which\n> you gathered the EXPLAIN, then I would seriously doubt how useful that\n> EXPLAIN could possibly be.\n>\n>\n> That's right, 10K ms. That's why I started trying to tune the performance\n> of this query -- I had a warning in my log of a long-running query. 
Two\n> thoughts why these numbers might be so different: 1) in production but not\n> in the standalone test, many jobs are hitting the database simultaneously,\n> so perhaps there was a great deal of I/O wait time involved;\n>\n\nIf the server is so overwhelmed that it takes 1000 times longer, I think it\nwould be setting off alarm bells all over the place, rather than with just\nthis one query. Unless all that other activity is driving your data out of\nthe cache so this query need to read it back from disk when on dev system\nit doesn't need to. But that means you need to find a way to simulate the\nactual cache hit on your dev system, or you won't be able to do realistic\ntesting.\n\nIt would really help to have \"explain (analyze, buffers)\". Especially if\nyou turn on track_io_timing, (although that part probably can't be done on\nHeroku, as it requires superuser access.)\n\nIt would help even more if you can get that information while the server is\nmisbehaving, rather than when it is not. Does Heroku support auto_explain?\n\n\n\n> 2) production and the standalone test were separated by about 8 hours, and\n> its possible that some sort of auto-vacuum operation had taken place to\n> make the execution more efficient.\n>\n\nMore likely would be that some operation caused it to change to a different\nplan that is inherently inefficient, rather than changing the efficiency of\na given plan. That is where auto_explain would help. Also, if the bad\nexecution is coming through your app, while the execution through psql,\nthen maybe your app is secretly changing some of the planner settings.\n\nCheers,\n\nJeff\n\nOn Fri, Feb 22, 2013 at 9:45 AM, Jack Royal-Gordon <[email protected]> wrote:\nOn Feb 22, 2013, at 9:19 AM, Jeff Janes <[email protected]> wrote:\nOn Fri, Feb 22, 2013 at 8:36 AM, jackrg <[email protected]> wrote:\n\nThe following query produces a Recheck Cond and a costly Bitmap Heap Scan\neven though I have a composite index that covers both columns being filtered\nand selected. Can you show us the definition of that index?CREATE INDEX test ON sales_points USING btree (book_id, store_id);\nWhen I made up some random data that I thought was plausible, the planner did choose to use this use this index. You might have some skew in your data that we need to know more about in order to reproduce this (But that is probably not worthwhile, see below)\n \n I believe this is because the initial bitmap scan produces\n\n2912 rows, which is too many for the available bitmap space. I've tried\nrewriting the command as \"Select ... group by\" but it still uses the BHS. Is\nthere a way to rewrite this command that would improve performance by\navoiding the costly Bitmap Heap Scan?How do you know that the bitmap heap scan is costly, since you haven't gotten it to use an alternative to compare it to? As a temporary experimental measure, you could at least set enable_bitmapscan TO off, to see what happens. 
\nHere's the \"Explan Analze\" output after setting enable_bitmapscan off:\"HashAggregate (cost=8202.02..8203.33 rows=131 width=8) (actual time=4.275..4.280 rows=32 loops=1)\"\n\" -> Index Only Scan using test on sales_points (cost=0.01..8187.46 rows=2912 width=8) (actual time=0.083..3.268 rows=4233 loops=1)\"\" Index Cond: ((book_id = ANY ('{421,422,419,420}'::integer[])) AND (store_id = ANY ('{1,2,3,4,5,6,199,201,202}'::integer[])))\"\n\" Heap Fetches: 4233\"\"Total runtime: 4.331 ms\"While the total runtime reported is actually less, the costs reported are 50% higher.\nI suspect this is because the planner assumes that IO is rather expensive, and the bitmap plan saves on IO slightly while taking more CPU. But since all your data is in memory in this case, the IO savings are not real, while the extra CPU time is real. However, this is probably not relevant. Shaving 5ms off of a 10ms execution time looks impressive, but shaving 5ms off of your problematic case of 10s isn't going to do much for you. \nIt could be that the time savings extrapolated to the problem case will be multiplicative rather than additive. But my gut is telling me it won't be multiplicative. In fact it will probably go the other way, if the bitmap truly is more IO efficient, and your problematic case is due to poor IO performance, then getting it off the bitmap scan could easily make things worse.\n \n\n Under load in production, the\ncommand takes 10,158 ms. Do you mean 10.158 ms rather than 10,158 ms? If the production environment really takes 1000 times longer than the environment in which you gathered the EXPLAIN, then I would seriously doubt how useful that EXPLAIN could possibly be.\nThat's right, 10K ms. That's why I started trying to tune the performance of this query -- I had a warning in my log of a long-running query. Two thoughts why these numbers might be so different: 1) in production but not in the standalone test, many jobs are hitting the database simultaneously, so perhaps there was a great deal of I/O wait time involved; \nIf the server is so overwhelmed that it takes 1000 times longer, I think it would be setting off alarm bells all over the place, rather than with just this one query. Unless all that other activity is driving your data out of the cache so this query need to read it back from disk when on dev system it doesn't need to. But that means you need to find a way to simulate the actual cache hit on your dev system, or you won't be able to do realistic testing.\nIt would really help to have \"explain (analyze, buffers)\". Especially if you turn on track_io_timing, (although that part probably can't be done on Heroku, as it requires superuser access.)It would help even more if you can get that information while the server is misbehaving, rather than when it is not. Does Heroku support auto_explain?\n 2) production and the standalone test were separated by about 8 hours, and its possible that some sort of auto-vacuum operation had taken place to make the execution more efficient.\nMore likely would be that some operation caused it to change to a different plan that is inherently inefficient, rather than changing the efficiency of a given plan. That is where auto_explain would help. Also, if the bad execution is coming through your app, while the execution through psql, then maybe your app is secretly changing some of the planner settings. \nCheers,Jeff",
"msg_date": "Sat, 23 Feb 2013 15:53:01 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding Recheck Cond when using Select Distinct"
},
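Concretely, the diagnostics Jeff is asking for look like the sketch below when aimed at the query from the start of the thread; only the track_io_timing line needs superuser (or a provider that exposes it), while the BUFFERS option works for any user.

    -- Optional, superuser-only: include per-node I/O timing in the output.
    SET track_io_timing = on;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT DISTINCT store_id, book_id
    FROM sales_points
    WHERE store_id IN (1, 2, 3, 4, 5, 6, 199, 201, 202)
      AND book_id IN (421, 422, 419, 420);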
{
"msg_contents": "On Sat, Feb 23, 2013 at 3:53 PM, Jeff Janes <[email protected]> wrote:\n> It would really help to have \"explain (analyze, buffers)\". Especially if\n> you turn on track_io_timing, (although that part probably can't be done on\n> Heroku, as it requires superuser access.)\n\nRight, that's not supported right now, although given that the\nsuperuser is primarily for performance considerations (right?),\nperhaps we should find some way of exposing this.\n\n> It would help even more if you can get that information while the server is\n> misbehaving, rather than when it is not. Does Heroku support auto_explain?\n\nWe used to have some support for it, but given that all the tunables\nthere are superuser-only, we found it wasn't very useful in its\ncurrent form and took it out. Same as above: we probably should find a\nway of exposing the tunables here to a less-trusted user. It'd be\ngreat to have more granularity in some of these permissions.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 25 Feb 2013 10:11:11 -0800",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding Recheck Cond when using Select Distinct"
},
{
"msg_contents": "On Mon, Feb 25, 2013 at 10:11 AM, Maciek Sakrejda <[email protected]>wrote:\n\n> On Sat, Feb 23, 2013 at 3:53 PM, Jeff Janes <[email protected]> wrote:\n> > It would really help to have \"explain (analyze, buffers)\". Especially if\n> > you turn on track_io_timing, (although that part probably can't be done\n> on\n> > Heroku, as it requires superuser access.)\n>\n> Right, that's not supported right now, although given that the\n> superuser is primarily for performance considerations (right?),\n> perhaps we should find some way of exposing this.\n>\n\n\nI don't think it is SUSET for performance reasons, as the ordinary user\nalready has plenty of ways to shoot themselves (and fellow users) in the\nfoot performance-wise. I think it was based on the idea that those\ntracking tools that the administrator has turned on, ordinary users may\nturn off. I think the other way around would be fine (if it is off for the\nserver, the user can still turn it on for their session--and presumably\nalso turn it off again if it is one only because they set it that way, not\nbecause the administrator set it that way), but I think that that behavior\nis not trivial to implement. I looked in the archives, but the SUSET\nnature of this doesn't seem to have been discussed.\n\nCheers,\n\nJeff\n\nOn Mon, Feb 25, 2013 at 10:11 AM, Maciek Sakrejda <[email protected]> wrote:\nOn Sat, Feb 23, 2013 at 3:53 PM, Jeff Janes <[email protected]> wrote:\n> It would really help to have \"explain (analyze, buffers)\". Especially if\n> you turn on track_io_timing, (although that part probably can't be done on\n> Heroku, as it requires superuser access.)\n\nRight, that's not supported right now, although given that the\nsuperuser is primarily for performance considerations (right?),\nperhaps we should find some way of exposing this.I don't think it is SUSET for performance reasons, as the ordinary user already has plenty of ways to shoot themselves (and fellow users) in the foot performance-wise. I think it was based on the idea that those tracking tools that the administrator has turned on, ordinary users may turn off. I think the other way around would be fine (if it is off for the server, the user can still turn it on for their session--and presumably also turn it off again if it is one only because they set it that way, not because the administrator set it that way), but I think that that behavior is not trivial to implement. I looked in the archives, but the SUSET nature of this doesn't seem to have been discussed.\n Cheers,Jeff",
"msg_date": "Mon, 25 Feb 2013 10:48:51 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding Recheck Cond when using Select Distinct"
}
] |
[
{
"msg_contents": "I have a problem with a query that is planned wrong. This is my schema:\n\n create table comments (\n id serial primary key,\n conversation_id integer,\n created_at timestamp\n );\n create index comments_conversation_id_index on comments (conversation_id);\n create index comments_created_at_index on comments (created_at);\n\nThe table has 3.5M rows, and 650k unique values for \"conversation_id\", where the histogram goes up to 54000 rows for the most frequent ID, with a long tail. There are only 20 values with a frequency of 1000 or higher. The \"created_at\" column has 3.5M distinct values.\n\nNow, I have this query:\n\n select comments.id from comments where\n conversation_id = 3975979 order by created_at limit 13\n\nThis filters about 5000 rows and returns the oldest 13 rows. But the query is consistently planned wrong:\n\n Limit (cost=0.00..830.53 rows=13 width=12) (actual time=3174.862..3179.525 rows=13 loops=1) \n Buffers: shared hit=2400709 read=338923 written=21 \n -> Index Scan using comments_created_at_index on comments (cost=0.00..359938.52 rows=5634 width=12) (actual time=3174.860..3179.518 rows=13 loops=1)\n Filter: (conversation_id = 3975979) \n Rows Removed by Filter: 2817751 \n Buffers: shared hit=2400709 read=338923 written=21 \n Total runtime: 3179.553 ms \n\nIt takes anywhere between 3 seconds and several minutes to run, depending on how warm the disk cache is. This is the correct plan and index usage:\n\n Limit (cost=6214.34..6214.38 rows=13 width=12) (actual time=25.471..25.473 rows=13 loops=1) \n Buffers: shared hit=197 read=4510 \n -> Sort (cost=6214.34..6228.02 rows=5471 width=12) (actual time=25.469..25.470 rows=13 loops=1) \n Sort Key: created_at \n Sort Method: top-N heapsort Memory: 25kB \n Buffers: shared hit=197 read=4510 \n -> Index Scan using comments_conversation_id_index on comments (cost=0.00..6085.76 rows=5471 width=12) (actual time=1.163..23.955 rows=5834 loops=1)\n Index Cond: (conversation_id = 3975979) \n Buffers: shared hit=197 read=4510 \n Total runtime: 25.500 ms \n\nThe problem for Postgres is obviously to estimate how many rows have a given \"conversation_id\" value, but I have confirmed that the value is correctly tracked in the histogram.\n\nI'm at a loss how to explain why the planner thinks scanning a huge index that covers the entire table will ever beat scanning a small index that has 17% of the table's values. Even if the entire database were in RAM, this would hit way too much buffers unnecessarily. (I have determined that planner will consistently use the bad plan for higher-frequency values, and the good plan for lower-frequency values.) It will *always* be better to branch off the \"comments_conversation_id_index\" index.\n\nAnother curious thing: If I run ANALYZE repeatedly, the planner will sometimes, oddly enough, choose the correct plan. This behaviour actually seems related to effective_cache_size; if it's set small (128MB), the planner will sometimes favour the good plan, but if large (>= 2GB) it will consistently use the bad plan. Not sure if ANALYZE is changing anything or if it's just bad timing.\n\nThings I have tried: I have bumped the statistics target up to 10000, but it does not help. I have also tried setting n_distinct for the column manually, since Postgres guesses it's 285k instead of 650k, but that does not help.\n\nOur environment: 9.2.2, tweaked memory parameters (work_mem, sort_mem, shared_buffers, effective_cache_size), not touched planner/cost parameters. Problem also exists on 9.2.3. 
\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Feb 2013 19:22:48 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad query plan with high-cardinality column"
},
{
"msg_contents": "Alexander Staubo <[email protected]> writes:\n> select comments.id from comments where\n> conversation_id = 3975979 order by created_at limit 13\n\n> I'm at a loss how to explain why the planner thinks scanning a huge\n> index that covers the entire table will ever beat scanning a small index\n> that has 17% of the table's values.\n\nThe reason is that the LIMIT may stop the query before it's scanned all\nof the index. The planner estimates on the assumption that the desired\nrows are roughly uniformly distributed within the created_at index, and\non that assumption, it looks like this query will stop fairly soon ...\nbut evidently, that's wrong. On the other hand, it knows quite well\nthat the other plan will require pulling out 5000-some rows and then\nsorting them before it can return anything, so that's not going to be\nexactly instantaneous either.\n\nIn this example, I'll bet that conversation_id and created_at are pretty\nstrongly correlated, and that most or all of the rows with that specific\nconversation_id are quite far down the created_at ordering, so that the\nsearch through the index takes a long time to run. OTOH, with another\nconversation_id the same plan might run almost instantaneously.\n\nIf you're concerned mostly with this type of query then a 2-column index\non (conversation_id, created_at) would serve your purposes nicely. You\ncould likely even dispense with the separate index on conversation_id\nalone.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Feb 2013 15:33:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan with high-cardinality column"
},
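Tom's suggestion, spelled out against the schema from the first message, looks roughly like this (a sketch, not a tested recipe; the index name here is made up):

    CREATE INDEX comments_conversation_id_created_at_index
        ON comments (conversation_id, created_at);

    -- The single-column index on conversation_id is now largely redundant
    -- for this workload and could probably be dropped:
    -- DROP INDEX comments_conversation_id_index;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT id FROM comments
    WHERE conversation_id = 3975979
    ORDER BY created_at
    LIMIT 13;

With such an index the entries for one conversation_id are adjacent and already in created_at order, so the LIMIT can be satisfied by reading just the first few index entries.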
{
"msg_contents": "Alexander Staubo <[email protected]> wrote:\n\n> This is my schema:\n>\n> create table comments (\n> id serial primary key,\n> conversation_id integer,\n> created_at timestamp\n> );\n> create index comments_conversation_id_index on comments (conversation_id);\n> create index comments_created_at_index on comments (created_at);\n\nI suspect you would be better off without those two indexes, and\ninstead having an index on (conversation_id, created_at). Not just\nfor the query you show, but in general.\n\n> select comments.id from comments where\n> conversation_id = 3975979 order by created_at limit 13\n>\n> This filters about 5000 rows and returns the oldest 13 rows. But\n> the query is consistently planned wrong:\n\n> [planner thinks it will be cheaper to read index in ORDER BY\n> sequence and filter rows until it has 13 than to read 5471 rows\n> and sort them to pick the top 13 after the sort.]\n\nIn my experience these problems come largely from the planner not\nknowing the cost of dealing with each tuple. I see a lot less of\nthis if I raise cpu_tuple_cost to something in the 0.03 to 0.05\nrange.\n\n-- \nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Feb 2013 12:47:56 -0800 (PST)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan with high-cardinality column"
},
{
"msg_contents": "On Friday, February 22, 2013 at 21:33 , Tom Lane wrote:\n> The reason is that the LIMIT may stop the query before it's scanned all\n> of the index. The planner estimates on the assumption that the desired\n> rows are roughly uniformly distributed within the created_at index, and\n> on that assumption, it looks like this query will stop fairly soon ...\n> but evidently, that's wrong. On the other hand, it knows quite well\n> that the other plan will require pulling out 5000-some rows and then\n> sorting them before it can return anything, so that's not going to be\n> exactly instantaneous either.\n> \n> In this example, I'll bet that conversation_id and created_at are pretty\n> strongly correlated, and that most or all of the rows with that specific\n> conversation_id are quite far down the created_at ordering, so that the\n> search through the index takes a long time to run. OTOH, with another\n> conversation_id the same plan might run almost instantaneously.\n\n\nThat's right. So I created a composite index, and not only does this make the plan correct, but the planner now chooses a much more efficient plan than the previous index that indexed only on \"conversation_id\":\n\n Limit (cost=0.00..30.80 rows=13 width=12) (actual time=0.042..0.058 rows=13 loops=1) \n Buffers: shared hit=8 \n -> Index Scan using index_comments_on_conversation_id_and_created_at on comments (cost=0.00..14127.83 rows=5964 width=12) (actual time=0.039..0.054 rows=13 loops=1)\n Index Cond: (conversation_id = 3975979) \n Buffers: shared hit=8 \n Total runtime: 0.094 ms \n\n\nIs this because it can get the value of \"created_at\" from the index, or is it because it can know that the index is pre-sorted, or both?\n\nVery impressed that Postgres can use a multi-column index for this. I just assumed, wrongly, that it couldn't. I will have to go review my other tables now and see if they can benefit from multi-column indexes.\n\nThanks!\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Feb 2013 22:31:48 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plan with high-cardinality column"
},
{
"msg_contents": "On Friday, February 22, 2013 at 21:47 , Kevin Grittner wrote:\n> I suspect you would be better off without those two indexes, and\n> instead having an index on (conversation_id, created_at). Not just\n> for the query you show, but in general.\n\n\nIndeed, that solved it, thanks!\n \n\n\n> In my experience these problems come largely from the planner not\n> knowing the cost of dealing with each tuple. I see a lot less of\n> this if I raise cpu_tuple_cost to something in the 0.03 to 0.05\n> range.\n\n\nIs this something I can just frob a bit without worrying about it adversely impacting database performance across the board, or should I be very careful and do lots of testing on a staging box first?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Feb 2013 22:34:21 +0100",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad query plan with high-cardinality column"
},
{
"msg_contents": "Alexander Staubo <[email protected]> wrote:\n> On Friday, February 22, 2013 at 21:47 , Kevin Grittner wrote:\n\n>> In my experience these problems come largely from the planner\n>> not knowing the cost of dealing with each tuple. I see a lot\n>> less of this if I raise cpu_tuple_cost to something in the 0.03\n>> to 0.05 range.\n>\n> Is this something I can just frob a bit without worrying about it\n> adversely impacting database performance across the board, or\n> should I be very careful and do lots of testing on a staging box\n> first?\n\nIf possible, I would recommend trying it with the old indexes and\nseeing whether it causes it to choose the better plan. (Of course,\nyou're not going to beat the plan you get with the two-column index\nfor this query, but it might help it better cost the other\nalternatives, which would be a clue that it makes your overall\ncosting model more accurate and would have a more general benefit.)\nYou can play with settings like this in a single session without\naffecting any other sessions.\n\nI always recommend testing a change like this in staging and\nclosely monitoring after deploying to production, to confirm the\noverall benefit and look for any odd cases which might suffer a\nperformance regression. For this particular change, I have never\nseen a negative effect, but I'm sure that it's possible to have a\nscenario where it isn't helpful.\n\nPersonally, I have changed this setting many times and have often\nnoted that 0.02 was not enough to cause choice of an optimal plan,\n0.03 was always enough to do it if adjusting this setting was going\nto help at all, and boosting it to 0.05 never caused further plan\nchanges in the cases I tested. I have never seen such increases\ncause less optimal plan choice.\n\nIf you try this, please post your results.\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Feb 2013 14:17:54 -0800 (PST)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan with high-cardinality column"
},
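Kevin's advice can be tried without touching postgresql.conf, since cpu_tuple_cost can be set per session. A sketch of the experiment he describes, with the original single-column indexes still in place (0.03 is his suggested starting point, not a measured optimum):

    SET cpu_tuple_cost = 0.03;   -- server default is 0.01

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT id FROM comments
    WHERE conversation_id = 3975979
    ORDER BY created_at
    LIMIT 13;

    RESET cpu_tuple_cost;        -- back to the default for this session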
{
"msg_contents": "Alexander Staubo <[email protected]> writes:\n> That's right. So I created a composite index, and not only does this make the plan correct, but the planner now chooses a much more efficient plan than the previous index that indexed only on \"conversation_id\":\n> ...\n> Is this because it can get the value of \"created_at\" from the index, or is it because it can know that the index is pre-sorted, or both?\n\nWhat it knows is that leading index columns that have equality\nconstraints are effectively \"don't-cares\" for ordering purposes.\nSo in general, an indexscan on an index on (x,y) will be seen to\nprovide the ordering required by any of these queries:\n\n\tselect ... order by x;\n\tselect ... order by x,y;\n\tselect ... where x = constant order by x,y;\n\tselect ... where x = constant order by y;\n\nYour query is an example of the last pattern. So the planner sees that\nthe bare indexscan, with no additional sort step, can satisfy the query,\nand then its cost estimate for that with the effects of the LIMIT will\nbe less than for the other possible plans. There's no need to scan and\nthen sort thousands of rows, and there's no need to read through a\nhard-to-guess-but-certainly-large number of irrelevant index entries.\nThe relevant part of the index is a small number of adjacent entries\nthat are already in the right order.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 23 Feb 2013 05:10:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad query plan with high-cardinality column"
}
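To illustrate Tom's list of query shapes, here is a hypothetical two-column index and the four ORDER BY patterns it can satisfy without a separate sort step (a sketch; the table and column names are invented):

    CREATE TABLE t (x integer, y integer);
    CREATE INDEX t_x_y_idx ON t (x, y);

    SELECT * FROM t ORDER BY x;                      -- leading column only
    SELECT * FROM t ORDER BY x, y;                   -- full index order
    SELECT * FROM t WHERE x = 42 ORDER BY x, y;      -- equality on x, then full order
    SELECT * FROM t WHERE x = 42 ORDER BY y;         -- equality makes x a "don't-care"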
] |
[
{
"msg_contents": "Hi,\nSince our upgrade of hardware, OS and Postgres we experience server stalls under certain conditions, during that time (up to 2 minutes) all CPUs show 100% system time. All Postgres processes show BIND in top.\nUsually the server only has a load of < 0.5 (12 cores) with up to 30 connections, 200-400 tps\n\nHere is top -H during the stall:\nThreads: 279 total, 25 running, 254 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 0.2 us, 99.8 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n\nThis is under normal circumstances:\nThreads: 274 total, 1 running, 273 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 0.2 us, 0.2 sy, 0.0 ni, 99.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n\niostat shows under 0.3% load on the drives.\n\nThe stalls are mostly reproducible when there is the normal load on the server and then 20-40 new processes start executing SQLs.\nDeactivating HT seemed to have reduced the frequency and length of the stalls.\n\nThe log shows entries for slow BINDs (8 seconds):\n... LOG: duration: 8452.654 ms bind pdo_stmt_00000001: SELECT [20 columns selected] FROM users WHERE users.USERID=$1 LIMIT 1\n\nI have tried to create a testcase, but even starting 200 client processes that execute prepared statements does not reproduce this behaviour on a nearly idle server, only under normal workload does it stall.\n\nHardware details:\n2x Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz\n64 GB RAM\n\nPostgres version: 9.2.2 and 9.2.3\n\nLinux: OpenSUSE 12.2 with Kernel 3.4.6\n\nPostgres config:\nmax_connections = 200\neffective_io_concurrency = 3\nmax_wal_senders = 2\nwal_keep_segments = 2048\nmax_locks_per_transaction = 500\ndefault_statistics_target = 100\ncheckpoint_completion_target = 0.9\nmaintenance_work_mem = 1GB\neffective_cache_size = 60GB\nwork_mem = 384MB\nwal_buffers = 8MB\ncheckpoint_segments = 64\nshared_buffers = 15GB\n\n\nThis might be related to this topic: http://www.postgresql.org/message-id/CANQNgOquOGH7AkqW6ObPafrgxv=J3WsiZg-NgVvbki-qYpoY7Q@mail.gmail.com (Poor performance after update from SLES11 SP1 to SP2)\nI believe the old server was OpenSUSE 11.x.\n\n\nThanks for any hint on how to fix this or diagnose the problem.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 25 Feb 2013 00:08:03 +1000",
"msg_from": "Andre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Server stalls, all CPU 100% system time"
},
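One way to narrow a stall like this down from inside Postgres is to capture what the backends are doing while it happens. A sketch, using the 9.2 pg_stat_activity column names (older releases name these columns differently):

    SELECT pid, state, waiting,
           now() - query_start AS running_for,
           query
    FROM pg_stat_activity
    WHERE state <> 'idle'
    ORDER BY query_start;

If the long runners show waiting = false but still sit in BIND, the time is being burned in the backend or the kernel rather than on a heavyweight lock.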
{
"msg_contents": "and your /etc/sysctl.conf is?\n\nCheers\nBèrto\n\nOn 24 February 2013 14:08, Andre <[email protected]> wrote:\n> Hi,\n> Since our upgrade of hardware, OS and Postgres we experience server stalls\n> under certain conditions, during that time (up to 2 minutes) all CPUs show\n> 100% system time. All Postgres processes show BIND in top.\n> Usually the server only has a load of < 0.5 (12 cores) with up to 30\n> connections, 200-400 tps\n>\n> Here is top -H during the stall:\n> Threads: 279 total, 25 running, 254 sleeping, 0 stopped, 0 zombie\n> %Cpu(s): 0.2 us, 99.8 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0\n> st\n>\n> This is under normal circumstances:\n> Threads: 274 total, 1 running, 273 sleeping, 0 stopped, 0 zombie\n> %Cpu(s): 0.2 us, 0.2 sy, 0.0 ni, 99.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0\n> st\n>\n> iostat shows under 0.3% load on the drives.\n>\n> The stalls are mostly reproducible when there is the normal load on the\n> server and then 20-40 new processes start executing SQLs.\n> Deactivating HT seemed to have reduced the frequency and length of the\n> stalls.\n>\n> The log shows entries for slow BINDs (8 seconds):\n> ... LOG: duration: 8452.654 ms bind pdo_stmt_00000001: SELECT [20 columns\n> selected] FROM users WHERE users.USERID=$1 LIMIT 1\n>\n> I have tried to create a testcase, but even starting 200 client processes\n> that execute prepared statements does not reproduce this behaviour on a\n> nearly idle server, only under normal workload does it stall.\n>\n> Hardware details:\n> 2x Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz\n> 64 GB RAM\n>\n> Postgres version: 9.2.2 and 9.2.3\n>\n> Linux: OpenSUSE 12.2 with Kernel 3.4.6\n>\n> Postgres config:\n> max_connections = 200\n> effective_io_concurrency = 3\n> max_wal_senders = 2\n> wal_keep_segments = 2048\n> max_locks_per_transaction = 500\n> default_statistics_target = 100\n> checkpoint_completion_target = 0.9\n> maintenance_work_mem = 1GB\n> effective_cache_size = 60GB\n> work_mem = 384MB\n> wal_buffers = 8MB\n> checkpoint_segments = 64\n> shared_buffers = 15GB\n>\n>\n> This might be related to this topic:\n> http://www.postgresql.org/message-id/CANQNgOquOGH7AkqW6ObPafrgxv=J3WsiZg-NgVvbki-qYpoY7Q@mail.gmail.com\n> (Poor performance after update from SLES11 SP1 to SP2)\n> I believe the old server was OpenSUSE 11.x.\n>\n>\n> Thanks for any hint on how to fix this or diagnose the problem.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \n==============================\nIf Pac-Man had affected us as kids, we'd all be running around in a\ndarkened room munching pills and listening to repetitive music.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 24 Feb 2013 14:13:13 +0000",
"msg_from": "=?UTF-8?B?QsOocnRvIMOrZCBTw6hyYQ==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server stalls, all CPU 100% system time"
},
{
"msg_contents": "On 25/02/2013 12:13 AM, Bèrto ëd Sèra wrote:\n> and your /etc/sysctl.conf is?\n>\n\n/etc/sysctl.conf only had unrelated options set, here is the output of sysctl -a:\n\nabi.vsyscall32 = 1\ndebug.exception-trace = 1\ndev.hpet.max-user-freq = 64\nkernel.acct = 4 2 30\nkernel.acpi_video_flags = 0\nkernel.auto_msgmni = 1\nkernel.blk_iopoll = 1\nkernel.cad_pid = 1\nkernel.cap_last_cap = 35\nkernel.compat-log = 1\nkernel.core_pattern = core\nkernel.core_pipe_limit = 0\nkernel.core_uses_pid = 0\nkernel.ctrl-alt-del = 0\nkernel.dmesg_restrict = 0\nkernel.hung_task_check_count = 4194304\nkernel.hung_task_panic = 0\nkernel.hung_task_timeout_secs = 0\nkernel.hung_task_warnings = 10\nkernel.io_delay_type = 0\nkernel.keys.gc_delay = 300\nkernel.keys.maxbytes = 20000\nkernel.keys.maxkeys = 200\nkernel.keys.root_maxbytes = 20000\nkernel.keys.root_maxkeys = 200\nkernel.kptr_restrict = 0\nkernel.kstack_depth_to_print = 12\nkernel.latencytop = 0\nkernel.max_lock_depth = 1024\nkernel.msgmax = 65536\nkernel.msgmnb = 65536\nkernel.msgmni = 32768\nkernel.ngroups_max = 65536\nkernel.nmi_watchdog = 1\nkernel.ns_last_pid = 14714\nkernel.osrelease = 3.4.6-2.10-desktop\nkernel.ostype = Linux\nkernel.overflowgid = 65534\nkernel.overflowuid = 65534\nkernel.panic = 0\nkernel.panic_on_io_nmi = 0\nkernel.panic_on_oops = 0\nkernel.panic_on_unrecovered_nmi = 0\nkernel.perf_event_max_sample_rate = 100000\nkernel.perf_event_mlock_kb = 516\nkernel.perf_event_paranoid = 1\nkernel.pid_max = 32768\nkernel.poweroff_cmd = /sbin/poweroff\nkernel.print-fatal-signals = 0\nkernel.printk = 1 4 1 7\nkernel.printk_delay = 0\nkernel.printk_ratelimit = 5\nkernel.printk_ratelimit_burst = 10\nkernel.random.boot_id = eaaed4b5-58cb-4f1d-be4a-62475d6ba312\nkernel.random.entropy_avail = 2488\nkernel.random.poolsize = 4096\nkernel.random.read_wakeup_threshold = 64\nkernel.random.uuid = bc40e670-c507-4b31-a310-f80eabda46b7\nkernel.random.write_wakeup_threshold = 1024\nkernel.randomize_va_space = 2\nkernel.sched_autogroup_enabled = 1\nkernel.sched_cfs_bandwidth_slice_us = 5000\nkernel.sched_child_runs_first = 0\nkernel.sched_domain.cpu0.domain0.busy_factor = 64\nkernel.sched_domain.cpu0.domain0.busy_idx = 2\nkernel.sched_domain.cpu0.domain0.cache_nice_tries = 1\nkernel.sched_domain.cpu0.domain0.flags = 4655\nkernel.sched_domain.cpu0.domain0.forkexec_idx = 0\nkernel.sched_domain.cpu0.domain0.idle_idx = 0\nkernel.sched_domain.cpu0.domain0.imbalance_pct = 125\nkernel.sched_domain.cpu0.domain0.max_interval = 4\nkernel.sched_domain.cpu0.domain0.min_interval = 1\nkernel.sched_domain.cpu0.domain0.name = MC\nkernel.sched_domain.cpu0.domain0.newidle_idx = 0\nkernel.sched_domain.cpu0.domain0.wake_idx = 0\n...\nkernel.sched_latency_ns = 24000000\nkernel.sched_migration_cost = 500000\nkernel.sched_min_granularity_ns = 3000000\nkernel.sched_nr_migrate = 32\nkernel.sched_rt_period_us = 1000000\nkernel.sched_rt_runtime_us = 950000\nkernel.sched_shares_window = 10000000\nkernel.sched_time_avg = 1000\nkernel.sched_tunable_scaling = 1\nkernel.sched_wakeup_granularity_ns = 4000000\nkernel.sem = 250 256000 32 1024\nkernel.shm_rmid_forced = 0\nkernel.shmall = 1152921504606846720\nkernel.shmmax = 18446744073709551615\nkernel.shmmni = 4096\nkernel.softlockup_panic = 0\nkernel.suid_dumpable = 0\nkernel.sysrq = 0\nkernel.tainted = 1536\nkernel.threads-max = 1032123\nkernel.timer_migration = 1\nkernel.unknown_nmi_panic = 0\nkernel.usermodehelper.bset = 4294967295 4294967295\nkernel.usermodehelper.inheritable = 4294967295 4294967295\nkernel.version = #1 SMP 
PREEMPT Thu Jul 26 09:36:26 UTC 2012 (641c197)\nkernel.watchdog = 1\nkernel.watchdog_thresh = 10\nvm.block_dump = 0\nvm.dirty_background_bytes = 0\nvm.dirty_background_ratio = 10\nvm.dirty_bytes = 0\nvm.dirty_expire_centisecs = 3000\nvm.dirty_ratio = 20\nvm.dirty_writeback_centisecs = 500\nvm.drop_caches = 0\nvm.extfrag_threshold = 500\nvm.hugepages_treat_as_movable = 0\nvm.hugetlb_shm_group = 0\nvm.laptop_mode = 0\nvm.legacy_va_layout = 0\nvm.lowmem_reserve_ratio = 256 256 32\nvm.max_map_count = 65530\nvm.memory_failure_early_kill = 0\nvm.memory_failure_recovery = 1\nvm.min_free_kbytes = 67584\nvm.min_slab_ratio = 5\nvm.min_unmapped_ratio = 1\nvm.mmap_min_addr = 65536\nvm.nr_hugepages = 0\nvm.nr_hugepages_mempolicy = 0\nvm.nr_overcommit_hugepages = 0\nvm.nr_pdflush_threads = 0\nvm.numa_zonelist_order = default\nvm.oom_dump_tasks = 1\nvm.oom_kill_allocating_task = 0\nvm.overcommit_memory = 0\nvm.overcommit_ratio = 50\nvm.page-cluster = 3\nvm.panic_on_oom = 0\nvm.percpu_pagelist_fraction = 0\nvm.scan_unevictable_pages = 0\nvm.stat_interval = 1\nvm.swappiness = 60\nvm.vfs_cache_pressure = 100\nvm.zone_reclaim_mode = 0\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 25 Feb 2013 00:39:34 +1000",
"msg_from": "Andre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server stalls, all CPU 100% system time"
},
{
"msg_contents": "Andre <[email protected]> writes:\n> Since our upgrade of hardware, OS and Postgres we experience server stalls under certain conditions, during that time (up to 2 minutes) all CPUs show 100% system time. All Postgres processes show BIND in top.\n\nOdd. I wonder if you are seeing some variant of the old context swap\nstorm problem. The \"99.8% system time\" reading is suggestive but hardly\nconclusive. Does top's report of context swap rate go to the moon?\n\nIt would be interesting to strace a few of the server processes while\none of these events is happening, too.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 24 Feb 2013 09:45:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server stalls, all CPU 100% system time"
},
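A sketch of how Tom's two suggestions could be captured during the next stall; these are stock Linux tools, nothing specific to this setup, and <pid> is a placeholder for one of the spinning backends:

    # per-second interrupt ("in") and context-switch ("cs") rates
    vmstat 1

    # syscall summary for one busy backend; stop with Ctrl-C to print the table
    strace -c -p <pid>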
{
"msg_contents": "On Sun, Feb 24, 2013 at 7:08 AM, Andre <[email protected]> wrote:\n> Hi,\n> Since our upgrade of hardware, OS and Postgres we experience server stalls\n> under certain conditions, during that time (up to 2 minutes) all CPUs show\n> 100% system time. All Postgres processes show BIND in top.\n> Usually the server only has a load of < 0.5 (12 cores) with up to 30\n> connections, 200-400 tps\n>\n> Here is top -H during the stall:\n> Threads: 279 total, 25 running, 254 sleeping, 0 stopped, 0 zombie\n> %Cpu(s): 0.2 us, 99.8 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0\n> st\n\nIt might be useful to see a page of top output as well. Further turn\non sysstat data collection so you do some post mortem work.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 24 Feb 2013 09:43:35 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server stalls, all CPU 100% system time"
},
{
"msg_contents": "On 25/02/2013 12:45 AM, Tom Lane wrote:\n> Odd. I wonder if you are seeing some variant of the old context swap storm problem. The \"99.8% system time\" reading is suggestive but hardly conclusive. Does top's report of context swap rate go to the moon? It would be interesting to strace a few of the server processes while one of these events is happening, too. regards, tom lane \n\nI used vmstat to look at the context swaps, they were around 5k and 15k interrupts per second.\nI thought that it was to many interrupts and after a bit of search a website mentioned that the network card driver could cause that. After updating kernel and the driver the stalling is not reproducible any more.\n\nWeird enough, when I load test the server now I have 35k interrupts and 250k context switches, but no problems at all.\n\nThanks for pointing me into the right direction.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 26 Feb 2013 22:53:17 +1000",
"msg_from": "Andre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server stalls, all CPU 100% system time"
},
{
"msg_contents": "Andre,\n\nPlease see the related thread on this list, \"High CPU usage / load\naverage after upgrading to Ubuntu 12.04\". You may be experiencing some\nof the same issues. General perspective seems to be that kernels 3.0\nthrough 3.4 have serious performance issues.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 26 Feb 2013 15:29:12 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server stalls, all CPU 100% system time"
},
{
"msg_contents": "On Tue, Feb 26, 2013 at 4:29 PM, Josh Berkus <[email protected]> wrote:\n> Andre,\n>\n> Please see the related thread on this list, \"High CPU usage / load\n> average after upgrading to Ubuntu 12.04\". You may be experiencing some\n> of the same issues. General perspective seems to be that kernels 3.0\n> through 3.4 have serious performance issues.\n\nSomeone commented they think it might be related to this kernel bug:\n\nhttps://lkml.org/lkml/2012/10/9/210\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 26 Feb 2013 18:02:49 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server stalls, all CPU 100% system time"
},
{
"msg_contents": "On 27/02/2013 9:29 AM, Josh Berkus wrote:\n> Andre,\n>\n> Please see the related thread on this list, \"High CPU usage / load\n> average after upgrading to Ubuntu 12.04\". You may be experiencing some\n> of the same issues. General perspective seems to be that kernels 3.0\n> through 3.4 have serious performance issues.\n>\n>\nJosh,\n\nI saw that thread, but it did not appear to be the same symptoms that I had. Where they have a high load average, I only saw spikes during which the server was unresponsive. During that time the load would jump to 50-70 (on 24 cores).\nAnyway, after upgrading the Kernel to 3.4.28 and the latest Intel network card driver the problem seems to be gone.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 27 Feb 2013 20:52:44 +1000",
"msg_from": "Andre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Server stalls, all CPU 100% system time"
},
{
"msg_contents": "\n> Someone commented they think it might be related to this kernel bug:\n> \n> https://lkml.org/lkml/2012/10/9/210\n> \n\nWe have some evidence that that is the case.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 27 Feb 2013 15:07:27 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server stalls, all CPU 100% system time"
}
] |
[
{
"msg_contents": "Howdy, the query generator in my app sometimes creates redundant\nfilters of the form:\n\nproject_id IN ( <list of projects user has permission to see> ) AND\nproject_id = <single project user is looking at >\n\n... and this is leading to a bad estimate (and thus a bad plan) on a\nfew complex queries. I've included simplified examples below. This\nserver is running 9.0.10 and the statistics target has been updated to\n1000 on the project_id column. I've also loaded the one table into a\n9.2.2 instance and replicated the behaviour.\n\nI can change how the query is being generated, but I'm curious why I'm\ngetting a bad estimate. Is this an expected result?\n\nThanks!\n\nMatt\n\n=============\n\n1) Filter on project_id only, row estimate for Bitmap Index Scan quite good.\n\nexplain (analyze,buffers) select count(id) from versions WHERE project_id=115;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1218111.01..1218111.02 rows=1 width=4) (actual\ntime=1531.341..1531.342 rows=1 loops=1)\n Buffers: shared hit=452619\n -> Bitmap Heap Scan on versions (cost=34245.06..1215254.86\nrows=1142461 width=4) (actual time=148.394..1453.383 rows=1114197\nloops=1)\n Recheck Cond: (project_id = 115)\n Buffers: shared hit=452619\n -> Bitmap Index Scan on versions_project_id\n(cost=0.00..33959.45 rows=1142461 width=0) (actual\ntime=139.709..139.709 rows=1116037 loops=1)\n Index Cond: (project_id = 115)\n Buffers: shared hit=22077\n Total runtime: 1531.399 ms\n\n2) Filter on project_id IN () AND project_id. Row estimate is ~10x lower.\n\nexplain (analyze,buffers) select count(id) from versions WHERE\nproject_id IN (80,115) AND project_id=115;;\n\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=327066.18..327066.19 rows=1 width=4) (actual\ntime=1637.889..1637.889 rows=1 loops=1)\n Buffers: shared hit=458389\n -> Bitmap Heap Scan on versions (cost=3546.56..326793.17\nrows=109201 width=4) (actual time=155.107..1557.453 rows=1114180\nloops=1)\n Recheck Cond: ((project_id = ANY ('{80,115}'::integer[])) AND\n(project_id = 115))\n Buffers: shared hit=458389\n -> Bitmap Index Scan on versions_project_id\n(cost=0.00..3519.26 rows=109201 width=0) (actual time=145.502..145.502\nrows=1125436 loops=1)\n Index Cond: ((project_id = ANY ('{80,115}'::integer[]))\nAND (project_id = 115))\n Buffers: shared hit=22076\n Total runtime: 1637.941 ms\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 26 Feb 2013 11:35:45 -0800",
"msg_from": "Matt Daw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Estimation question..."
},
{
"msg_contents": "Quick follow up... I've found that the row estimate in:\n\nexplain select count(id) from versions where project_id IN (80,115)\nAND project_id=115;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Aggregate (cost=178572.75..178572.76 rows=1 width=4)\n -> Index Scan using dneg_versions_project_id on versions\n(cost=0.00..178306.94 rows=106323 width=4)\n Index Cond: ((project_id = ANY ('{80,115}'::integer[])) AND\n(project_id = 115))\n\n\n... is the sum of two other estimates, seen when rewriting the query\nusing OR instead of IN:\n\n\nexplain select count(id) from versions where (project_id = 80 OR\nproject_id = 115) AND project_id=115;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=305896.95..305896.96 rows=1 width=4)\n -> Bitmap Heap Scan on versions (cost=2315.08..305632.00\nrows=105980 width=4)\n Recheck Cond: (((project_id = 80) AND (project_id = 115)) OR\n((project_id = 115) AND (project_id = 115)))\n -> BitmapOr (cost=2315.08..2315.08 rows=106323 width=0)\n -> Bitmap Index Scan on dneg_versions_project_id\n(cost=0.00..94.52 rows=3709 width=0)\n Index Cond: ((project_id = 80) AND (project_id = 115))\n -> Bitmap Index Scan on dneg_versions_project_id\n(cost=0.00..2167.57 rows=102614 width=0)\n Index Cond: ((project_id = 115) AND (project_id = 115))\n\n106323 = 3709 + 102614\n\nLooks like the underlying problem is that the estimate for\n((project_id = 115) AND (project_id = 115)) doesn't end up being the\nsame as (project_id=115) on its own.\n\nMatt\n\nOn Tue, Feb 26, 2013 at 11:35 AM, Matt Daw <[email protected]> wrote:\n> Howdy, the query generator in my app sometimes creates redundant\n> filters of the form:\n>\n> project_id IN ( <list of projects user has permission to see> ) AND\n> project_id = <single project user is looking at >\n>\n> ... and this is leading to a bad estimate (and thus a bad plan) on a\n> few complex queries. I've included simplified examples below. This\n> server is running 9.0.10 and the statistics target has been updated to\n> 1000 on the project_id column. I've also loaded the one table into a\n> 9.2.2 instance and replicated the behaviour.\n>\n> I can change how the query is being generated, but I'm curious why I'm\n> getting a bad estimate. Is this an expected result?\n>\n> Thanks!\n>\n> Matt\n>\n> =============\n>\n> 1) Filter on project_id only, row estimate for Bitmap Index Scan quite good.\n>\n> explain (analyze,buffers) select count(id) from versions WHERE project_id=115;\n>\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=1218111.01..1218111.02 rows=1 width=4) (actual\n> time=1531.341..1531.342 rows=1 loops=1)\n> Buffers: shared hit=452619\n> -> Bitmap Heap Scan on versions (cost=34245.06..1215254.86\n> rows=1142461 width=4) (actual time=148.394..1453.383 rows=1114197\n> loops=1)\n> Recheck Cond: (project_id = 115)\n> Buffers: shared hit=452619\n> -> Bitmap Index Scan on versions_project_id\n> (cost=0.00..33959.45 rows=1142461 width=0) (actual\n> time=139.709..139.709 rows=1116037 loops=1)\n> Index Cond: (project_id = 115)\n> Buffers: shared hit=22077\n> Total runtime: 1531.399 ms\n>\n> 2) Filter on project_id IN () AND project_id. 
Row estimate is ~10x lower.\n>\n> explain (analyze,buffers) select count(id) from versions WHERE\n> project_id IN (80,115) AND project_id=115;;\n>\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=327066.18..327066.19 rows=1 width=4) (actual\n> time=1637.889..1637.889 rows=1 loops=1)\n> Buffers: shared hit=458389\n> -> Bitmap Heap Scan on versions (cost=3546.56..326793.17\n> rows=109201 width=4) (actual time=155.107..1557.453 rows=1114180\n> loops=1)\n> Recheck Cond: ((project_id = ANY ('{80,115}'::integer[])) AND\n> (project_id = 115))\n> Buffers: shared hit=458389\n> -> Bitmap Index Scan on versions_project_id\n> (cost=0.00..3519.26 rows=109201 width=0) (actual time=145.502..145.502\n> rows=1125436 loops=1)\n> Index Cond: ((project_id = ANY ('{80,115}'::integer[]))\n> AND (project_id = 115))\n> Buffers: shared hit=22076\n> Total runtime: 1637.941 ms\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 27 Feb 2013 09:08:45 -0800",
"msg_from": "Matt Daw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Estimation question..."
},
{
"msg_contents": "I get it now... project_id=115 has a frequency of 0.09241 in pg_stats.\nSo if ((project_id = 115) AND (project_id = 115)) is considered as two\nindependent conditions, the row estimate ends up being 0.09241 *\n0.09241 * 1.20163e+07 (reltuples from pg_class) = 102614.\n\nhttp://www.postgresql.org/docs/9.0/static/row-estimation-examples.html\nwas a big help.\n\nMatt\n\nOn Wed, Feb 27, 2013 at 9:08 AM, Matt Daw <[email protected]> wrote:\n> Quick follow up... I've found that the row estimate in:\n>\n> explain select count(id) from versions where project_id IN (80,115)\n> AND project_id=115;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------\n> Aggregate (cost=178572.75..178572.76 rows=1 width=4)\n> -> Index Scan using dneg_versions_project_id on versions\n> (cost=0.00..178306.94 rows=106323 width=4)\n> Index Cond: ((project_id = ANY ('{80,115}'::integer[])) AND\n> (project_id = 115))\n>\n>\n> ... is the sum of two other estimates, seen when rewriting the query\n> using OR instead of IN:\n>\n>\n> explain select count(id) from versions where (project_id = 80 OR\n> project_id = 115) AND project_id=115;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=305896.95..305896.96 rows=1 width=4)\n> -> Bitmap Heap Scan on versions (cost=2315.08..305632.00\n> rows=105980 width=4)\n> Recheck Cond: (((project_id = 80) AND (project_id = 115)) OR\n> ((project_id = 115) AND (project_id = 115)))\n> -> BitmapOr (cost=2315.08..2315.08 rows=106323 width=0)\n> -> Bitmap Index Scan on dneg_versions_project_id\n> (cost=0.00..94.52 rows=3709 width=0)\n> Index Cond: ((project_id = 80) AND (project_id = 115))\n> -> Bitmap Index Scan on dneg_versions_project_id\n> (cost=0.00..2167.57 rows=102614 width=0)\n> Index Cond: ((project_id = 115) AND (project_id = 115))\n>\n> 106323 = 3709 + 102614\n>\n> Looks like the underlying problem is that the estimate for\n> ((project_id = 115) AND (project_id = 115)) doesn't end up being the\n> same as (project_id=115) on its own.\n>\n> Matt\n>\n> On Tue, Feb 26, 2013 at 11:35 AM, Matt Daw <[email protected]> wrote:\n>> Howdy, the query generator in my app sometimes creates redundant\n>> filters of the form:\n>>\n>> project_id IN ( <list of projects user has permission to see> ) AND\n>> project_id = <single project user is looking at >\n>>\n>> ... and this is leading to a bad estimate (and thus a bad plan) on a\n>> few complex queries. I've included simplified examples below. This\n>> server is running 9.0.10 and the statistics target has been updated to\n>> 1000 on the project_id column. I've also loaded the one table into a\n>> 9.2.2 instance and replicated the behaviour.\n>>\n>> I can change how the query is being generated, but I'm curious why I'm\n>> getting a bad estimate. 
Is this an expected result?\n>>\n>> Thanks!\n>>\n>> Matt\n>>\n>> =============\n>>\n>> 1) Filter on project_id only, row estimate for Bitmap Index Scan quite good.\n>>\n>> explain (analyze,buffers) select count(id) from versions WHERE project_id=115;\n>>\n>> QUERY PLAN\n>> -----------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Aggregate (cost=1218111.01..1218111.02 rows=1 width=4) (actual\n>> time=1531.341..1531.342 rows=1 loops=1)\n>> Buffers: shared hit=452619\n>> -> Bitmap Heap Scan on versions (cost=34245.06..1215254.86\n>> rows=1142461 width=4) (actual time=148.394..1453.383 rows=1114197\n>> loops=1)\n>> Recheck Cond: (project_id = 115)\n>> Buffers: shared hit=452619\n>> -> Bitmap Index Scan on versions_project_id\n>> (cost=0.00..33959.45 rows=1142461 width=0) (actual\n>> time=139.709..139.709 rows=1116037 loops=1)\n>> Index Cond: (project_id = 115)\n>> Buffers: shared hit=22077\n>> Total runtime: 1531.399 ms\n>>\n>> 2) Filter on project_id IN () AND project_id. Row estimate is ~10x lower.\n>>\n>> explain (analyze,buffers) select count(id) from versions WHERE\n>> project_id IN (80,115) AND project_id=115;;\n>>\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Aggregate (cost=327066.18..327066.19 rows=1 width=4) (actual\n>> time=1637.889..1637.889 rows=1 loops=1)\n>> Buffers: shared hit=458389\n>> -> Bitmap Heap Scan on versions (cost=3546.56..326793.17\n>> rows=109201 width=4) (actual time=155.107..1557.453 rows=1114180\n>> loops=1)\n>> Recheck Cond: ((project_id = ANY ('{80,115}'::integer[])) AND\n>> (project_id = 115))\n>> Buffers: shared hit=458389\n>> -> Bitmap Index Scan on versions_project_id\n>> (cost=0.00..3519.26 rows=109201 width=0) (actual time=145.502..145.502\n>> rows=1125436 loops=1)\n>> Index Cond: ((project_id = ANY ('{80,115}'::integer[]))\n>> AND (project_id = 115))\n>> Buffers: shared hit=22076\n>> Total runtime: 1637.941 ms\n\n\nOn Wed, Feb 27, 2013 at 9:08 AM, Matt Daw <[email protected]> wrote:\n> Quick follow up... I've found that the row estimate in:\n>\n> explain select count(id) from versions where project_id IN (80,115)\n> AND project_id=115;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------\n> Aggregate (cost=178572.75..178572.76 rows=1 width=4)\n> -> Index Scan using dneg_versions_project_id on versions\n> (cost=0.00..178306.94 rows=106323 width=4)\n> Index Cond: ((project_id = ANY ('{80,115}'::integer[])) AND\n> (project_id = 115))\n>\n>\n> ... 
is the sum of two other estimates, seen when rewriting the query\n> using OR instead of IN:\n>\n>\n> explain select count(id) from versions where (project_id = 80 OR\n> project_id = 115) AND project_id=115;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=305896.95..305896.96 rows=1 width=4)\n> -> Bitmap Heap Scan on versions (cost=2315.08..305632.00\n> rows=105980 width=4)\n> Recheck Cond: (((project_id = 80) AND (project_id = 115)) OR\n> ((project_id = 115) AND (project_id = 115)))\n> -> BitmapOr (cost=2315.08..2315.08 rows=106323 width=0)\n> -> Bitmap Index Scan on dneg_versions_project_id\n> (cost=0.00..94.52 rows=3709 width=0)\n> Index Cond: ((project_id = 80) AND (project_id = 115))\n> -> Bitmap Index Scan on dneg_versions_project_id\n> (cost=0.00..2167.57 rows=102614 width=0)\n> Index Cond: ((project_id = 115) AND (project_id = 115))\n>\n> 106323 = 3709 + 102614\n>\n> Looks like the underlying problem is that the estimate for\n> ((project_id = 115) AND (project_id = 115)) doesn't end up being the\n> same as (project_id=115) on its own.\n>\n> Matt\n>\n> On Tue, Feb 26, 2013 at 11:35 AM, Matt Daw <[email protected]> wrote:\n>> Howdy, the query generator in my app sometimes creates redundant\n>> filters of the form:\n>>\n>> project_id IN ( <list of projects user has permission to see> ) AND\n>> project_id = <single project user is looking at >\n>>\n>> ... and this is leading to a bad estimate (and thus a bad plan) on a\n>> few complex queries. I've included simplified examples below. This\n>> server is running 9.0.10 and the statistics target has been updated to\n>> 1000 on the project_id column. I've also loaded the one table into a\n>> 9.2.2 instance and replicated the behaviour.\n>>\n>> I can change how the query is being generated, but I'm curious why I'm\n>> getting a bad estimate. Is this an expected result?\n>>\n>> Thanks!\n>>\n>> Matt\n>>\n>> =============\n>>\n>> 1) Filter on project_id only, row estimate for Bitmap Index Scan quite good.\n>>\n>> explain (analyze,buffers) select count(id) from versions WHERE project_id=115;\n>>\n>> QUERY PLAN\n>> -----------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Aggregate (cost=1218111.01..1218111.02 rows=1 width=4) (actual\n>> time=1531.341..1531.342 rows=1 loops=1)\n>> Buffers: shared hit=452619\n>> -> Bitmap Heap Scan on versions (cost=34245.06..1215254.86\n>> rows=1142461 width=4) (actual time=148.394..1453.383 rows=1114197\n>> loops=1)\n>> Recheck Cond: (project_id = 115)\n>> Buffers: shared hit=452619\n>> -> Bitmap Index Scan on versions_project_id\n>> (cost=0.00..33959.45 rows=1142461 width=0) (actual\n>> time=139.709..139.709 rows=1116037 loops=1)\n>> Index Cond: (project_id = 115)\n>> Buffers: shared hit=22077\n>> Total runtime: 1531.399 ms\n>>\n>> 2) Filter on project_id IN () AND project_id. 
Row estimate is ~10x lower.\n>>\n>> explain (analyze,buffers) select count(id) from versions WHERE\n>> project_id IN (80,115) AND project_id=115;;\n>>\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Aggregate (cost=327066.18..327066.19 rows=1 width=4) (actual\n>> time=1637.889..1637.889 rows=1 loops=1)\n>> Buffers: shared hit=458389\n>> -> Bitmap Heap Scan on versions (cost=3546.56..326793.17\n>> rows=109201 width=4) (actual time=155.107..1557.453 rows=1114180\n>> loops=1)\n>> Recheck Cond: ((project_id = ANY ('{80,115}'::integer[])) AND\n>> (project_id = 115))\n>> Buffers: shared hit=458389\n>> -> Bitmap Index Scan on versions_project_id\n>> (cost=0.00..3519.26 rows=109201 width=0) (actual time=145.502..145.502\n>> rows=1125436 loops=1)\n>> Index Cond: ((project_id = ANY ('{80,115}'::integer[]))\n>> AND (project_id = 115))\n>> Buffers: shared hit=22076\n>> Total runtime: 1637.941 ms\n\nOn Wed, Feb 27, 2013 at 9:08 AM, Matt Daw <[email protected]> wrote:\n> Quick follow up... I've found that the row estimate in:\n>\n> explain select count(id) from versions where project_id IN (80,115)\n> AND project_id=115;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------\n> Aggregate (cost=178572.75..178572.76 rows=1 width=4)\n> -> Index Scan using dneg_versions_project_id on versions\n> (cost=0.00..178306.94 rows=106323 width=4)\n> Index Cond: ((project_id = ANY ('{80,115}'::integer[])) AND\n> (project_id = 115))\n>\n>\n> ... is the sum of two other estimates, seen when rewriting the query\n> using OR instead of IN:\n>\n>\n> explain select count(id) from versions where (project_id = 80 OR\n> project_id = 115) AND project_id=115;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=305896.95..305896.96 rows=1 width=4)\n> -> Bitmap Heap Scan on versions (cost=2315.08..305632.00\n> rows=105980 width=4)\n> Recheck Cond: (((project_id = 80) AND (project_id = 115)) OR\n> ((project_id = 115) AND (project_id = 115)))\n> -> BitmapOr (cost=2315.08..2315.08 rows=106323 width=0)\n> -> Bitmap Index Scan on dneg_versions_project_id\n> (cost=0.00..94.52 rows=3709 width=0)\n> Index Cond: ((project_id = 80) AND (project_id = 115))\n> -> Bitmap Index Scan on dneg_versions_project_id\n> (cost=0.00..2167.57 rows=102614 width=0)\n> Index Cond: ((project_id = 115) AND (project_id = 115))\n>\n> 106323 = 3709 + 102614\n>\n> Looks like the underlying problem is that the estimate for\n> ((project_id = 115) AND (project_id = 115)) doesn't end up being the\n> same as (project_id=115) on its own.\n>\n> Matt\n>\n> On Tue, Feb 26, 2013 at 11:35 AM, Matt Daw <[email protected]> wrote:\n>> Howdy, the query generator in my app sometimes creates redundant\n>> filters of the form:\n>>\n>> project_id IN ( <list of projects user has permission to see> ) AND\n>> project_id = <single project user is looking at >\n>>\n>> ... and this is leading to a bad estimate (and thus a bad plan) on a\n>> few complex queries. I've included simplified examples below. This\n>> server is running 9.0.10 and the statistics target has been updated to\n>> 1000 on the project_id column. 
I've also loaded the one table into a\n>> 9.2.2 instance and replicated the behaviour.\n>>\n>> I can change how the query is being generated, but I'm curious why I'm\n>> getting a bad estimate. Is this an expected result?\n>>\n>> Thanks!\n>>\n>> Matt\n>>\n>> =============\n>>\n>> 1) Filter on project_id only, row estimate for Bitmap Index Scan quite good.\n>>\n>> explain (analyze,buffers) select count(id) from versions WHERE project_id=115;\n>>\n>> QUERY PLAN\n>> -----------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Aggregate (cost=1218111.01..1218111.02 rows=1 width=4) (actual\n>> time=1531.341..1531.342 rows=1 loops=1)\n>> Buffers: shared hit=452619\n>> -> Bitmap Heap Scan on versions (cost=34245.06..1215254.86\n>> rows=1142461 width=4) (actual time=148.394..1453.383 rows=1114197\n>> loops=1)\n>> Recheck Cond: (project_id = 115)\n>> Buffers: shared hit=452619\n>> -> Bitmap Index Scan on versions_project_id\n>> (cost=0.00..33959.45 rows=1142461 width=0) (actual\n>> time=139.709..139.709 rows=1116037 loops=1)\n>> Index Cond: (project_id = 115)\n>> Buffers: shared hit=22077\n>> Total runtime: 1531.399 ms\n>>\n>> 2) Filter on project_id IN () AND project_id. Row estimate is ~10x lower.\n>>\n>> explain (analyze,buffers) select count(id) from versions WHERE\n>> project_id IN (80,115) AND project_id=115;;\n>>\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Aggregate (cost=327066.18..327066.19 rows=1 width=4) (actual\n>> time=1637.889..1637.889 rows=1 loops=1)\n>> Buffers: shared hit=458389\n>> -> Bitmap Heap Scan on versions (cost=3546.56..326793.17\n>> rows=109201 width=4) (actual time=155.107..1557.453 rows=1114180\n>> loops=1)\n>> Recheck Cond: ((project_id = ANY ('{80,115}'::integer[])) AND\n>> (project_id = 115))\n>> Buffers: shared hit=458389\n>> -> Bitmap Index Scan on versions_project_id\n>> (cost=0.00..3519.26 rows=109201 width=0) (actual time=145.502..145.502\n>> rows=1125436 loops=1)\n>> Index Cond: ((project_id = ANY ('{80,115}'::integer[]))\n>> AND (project_id = 115))\n>> Buffers: shared hit=22076\n>> Total runtime: 1637.941 ms\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Feb 2013 08:31:45 -0800",
"msg_from": "Matt Daw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Estimation question..."
}
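Matt's arithmetic can be reproduced straight from the catalogs; a sketch assuming the same table and column names (the 0.09241 frequency and the 1.20163e+07 reltuples figure are the ones quoted in his message):

    SELECT most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'versions' AND attname = 'project_id';

    SELECT reltuples::bigint FROM pg_class WHERE relname = 'versions';

    -- Treating the two identical clauses as independent:
    --   0.09241 * 0.09241 * 12,016,300  =  ~102,614 rows   (the bad estimate)
    -- versus the single-clause estimate:
    --   0.09241 * 12,016,300            =  ~1,110,000 rows (the same ballpark as the
    --                                                       ~1.14M rows in the first plan)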
] |
[
{
"msg_contents": "I took some time to figure out a reasonable tuning for my fresh 9.2.3\ninstallation when I've noticed the following:\n\n[costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U\npostgres -i -s 1\n...\n100000 tuples done.\n...\nvacuum...done.\n[costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U\npostgres -c 32 -t 5000\n...\ntps = 245.628075 (including connections establishing)\ntps = 245.697421 (excluding connections establishing)\n...\n[costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U\npostgres -i -s 100\n...\n10000000 tuples done.\n...\nvacuum...done.\n[costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U\npostgres -c 32 -t 5000\n...\ntps = 1125.035567 (including connections establishing)\ntps = 1126.490634 (excluding connections establishing)\n\n32 connections makes a comfortable load for the 8 core 4GB production\nserver, a rather old machine. I kept testing for almost two days with\nvarious configuration parameters. In the beginning I was warned to\nincrease the checkpoint_segments, which is now 32. The results were\nconsistent and always showing small scale test (-s 1) at about 245-248\ntps while big scale test (-s 100) at least 4 and up to 7 times\nbetter.\n\nAccording to top, at small scale tests, server processes are doing a\nlot of UPDATE waiting. A \"select relation::regclass, * from pg_locks\nwhere not granted\" showed frequent contention on tellers rows.\n\nFirst, I've got no good explanation for this and it would be nice to\nhave one. As far as I can understand this issue, the heaviest update\ntraffic should be on the branches table and should affect all tests.\n\nSecond, I'd really like to bring the small scale test close to a four\nfigure tps if it proves to be a matter of server configuration.\n\nThank you,\nCostin Oproiu\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 26 Feb 2013 23:45:57 +0200",
"msg_from": "Costin Oproiu <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgbench intriguing results: better tps figures for larger scale\n factor"
},
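The pg_locks query mentioned above can be joined to pg_stat_activity to see which statements are actually waiting. A sketch assuming the 9.2 column names (on 9.2, pg_locks.pid matches pg_stat_activity.pid and the statement text is in the query column):

    SELECT a.pid, l.locktype, l.relation::regclass AS relation, l.mode, a.query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.pid = l.pid
    WHERE NOT l.granted;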
{
"msg_contents": "On Wed, Feb 27, 2013 at 3:15 AM, Costin Oproiu <[email protected]> wrote:\n> I took some time to figure out a reasonable tuning for my fresh 9.2.3\n> installation when I've noticed the following:\n>\n> [costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U\n> postgres -i -s 1\n> ...\n> 100000 tuples done.\n> ...\n> vacuum...done.\n> [costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U\n> postgres -c 32 -t 5000\n> ...\n> tps = 245.628075 (including connections establishing)\n> tps = 245.697421 (excluding connections establishing)\n> ...\n> [costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U\n> postgres -i -s 100\n> ...\n> 10000000 tuples done.\n> ...\n> vacuum...done.\n> [costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U\n> postgres -c 32 -t 5000\n> ...\n> tps = 1125.035567 (including connections establishing)\n> tps = 1126.490634 (excluding connections establishing)\n>\n> 32 connections makes a comfortable load for the 8 core 4GB production\n> server, a rather old machine. I kept testing for almost two days with\n> various configuration parameters. In the beginning I was warned to\n> increase the checkpoint_segments, which is now 32. The results were\n> consistent and always showing small scale test (-s 1) at about 245-248\n> tps while big scale test (-s 100) at least 4 and up to 7 times\n> better.\n>\n> According to top, at small scale tests, server processes are doing a\n> lot of UPDATE waiting. A \"select relation::regclass, * from pg_locks\n> where not granted\" showed frequent contention on tellers rows.\n>\n> First, I've got no good explanation for this and it would be nice to\n> have one. As far as I can understand this issue, the heaviest update\n> traffic should be on the branches table and should affect all tests.\n>\n\nIts not very surprising. The smallest table in the test i.e.\npgbench_branches has the number of rows equal to the scale factor.\nWhen you test with scale factor 1 and 32 clients, all those clients\nare contending to update that single row in the table. Since a\ntransaction must wait for the other updating transaction before it can\nupdate the same row, you would get a almost linear behaviour in this\ntest. You may actually want to test with just 1 or 5 or 10 clients and\nmy gut feel is you will still get the same or similar tps.\n\nAs the scale factor is increased, the contention on the smaller tables\nreduces and you will start seeing an increase in the tps as you\nincrease the number of clients. Of course, beyond a point either it\nwill flatten out or even go down.\n\nWhile testing with pgbench, its recommended that the scale factor\nshould be set larger than the number of clients.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nhttp://www.linkedin.com/in/pavandeolasee\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Feb 2013 22:50:33 +0530",
"msg_from": "Pavan Deolasee <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench intriguing results: better tps figures for\n\tlarger scale factor"
},
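Pavan's suggested sanity check, expressed as commands in the same style as the original post (paths and connection options are the poster's, carried over unchanged): re-initialize at scale 1 and watch whether tps stays roughly flat as clients are added, which is the signature of everyone queuing on the single pgbench_branches row.

    /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U postgres -i -s 1
    for c in 1 5 10 32; do
        /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U postgres -c $c -t 5000
    done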
{
"msg_contents": "On Thu, Feb 28, 2013 at 11:20 AM, Pavan Deolasee\n<[email protected]> wrote:\n> On Wed, Feb 27, 2013 at 3:15 AM, Costin Oproiu <[email protected]> wrote:\n>> I took some time to figure out a reasonable tuning for my fresh 9.2.3\n>> installation when I've noticed the following:\n>>\n>> [costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U\n>> postgres -i -s 1\n>> ...\n>> 100000 tuples done.\n>> ...\n>> vacuum...done.\n>> [costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U\n>> postgres -c 32 -t 5000\n>> ...\n>> tps = 245.628075 (including connections establishing)\n>> tps = 245.697421 (excluding connections establishing)\n>> ...\n>> [costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U\n>> postgres -i -s 100\n>> ...\n>> 10000000 tuples done.\n>> ...\n>> vacuum...done.\n>> [costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U\n>> postgres -c 32 -t 5000\n>> ...\n>> tps = 1125.035567 (including connections establishing)\n>> tps = 1126.490634 (excluding connections establishing)\n>>\n>> 32 connections makes a comfortable load for the 8 core 4GB production\n>> server, a rather old machine. I kept testing for almost two days with\n>> various configuration parameters. In the beginning I was warned to\n>> increase the checkpoint_segments, which is now 32. The results were\n>> consistent and always showing small scale test (-s 1) at about 245-248\n>> tps while big scale test (-s 100) at least 4 and up to 7 times\n>> better.\n>>\n>> According to top, at small scale tests, server processes are doing a\n>> lot of UPDATE waiting. A \"select relation::regclass, * from pg_locks\n>> where not granted\" showed frequent contention on tellers rows.\n>>\n>> First, I've got no good explanation for this and it would be nice to\n>> have one. As far as I can understand this issue, the heaviest update\n>> traffic should be on the branches table and should affect all tests.\n>>\n>\n> Its not very surprising. The smallest table in the test i.e.\n> pgbench_branches has the number of rows equal to the scale factor.\n> When you test with scale factor 1 and 32 clients, all those clients\n> are contending to update that single row in the table. Since a\n> transaction must wait for the other updating transaction before it can\n> update the same row, you would get a almost linear behaviour in this\n> test. You may actually want to test with just 1 or 5 or 10 clients and\n> my gut feel is you will still get the same or similar tps.\n>\n> As the scale factor is increased, the contention on the smaller tables\n> reduces and you will start seeing an increase in the tps as you\n> increase the number of clients. Of course, beyond a point either it\n> will flatten out or even go down.\n>\n> While testing with pgbench, its recommended that the scale factor\n> should be set larger than the number of clients.\n\nor, you can suppress the high contention updates with the -N switch\n(if you do this make sure to disclose it when giving tps results).\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 4 Mar 2013 08:21:41 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench intriguing results: better tps figures for\n\tlarger scale factor"
},
{
"msg_contents": "On 2/26/13 4:45 PM, Costin Oproiu wrote:\n> First, I've got no good explanation for this and it would be nice to\n> have one. As far as I can understand this issue, the heaviest update\n> traffic should be on the branches table and should affect all tests.\n\n From http://www.postgresql.org/docs/current/static/pgbench.html :\n\n\"For the default TPC-B-like test scenario, the initialization scale \nfactor (-s) should be at least as large as the largest number of clients \nyou intend to test (-c); else you'll mostly be measuring update \ncontention. There are only -s rows in the pgbench_branches table, and \nevery transaction wants to update one of them, so -c values in excess of \n-s will undoubtedly result in lots of transactions blocked waiting for \nother transactions.\"\n\nI normally see peak TPS at a scale of around 100 on current generation \nhardware, stuff in the 4 to 24 core range. Nowadays there really is no \nreason to consider running pgbench on a system with a smaller scale than \nthat. I normally get a rough idea of things by running with scales 100, \n250, 500, 1000, 2000.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 04 Mar 2013 17:46:29 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench intriguing results: better tps figures for\n\tlarger scale factor"
}
] |
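To make the row-contention explanation above concrete, the pg_locks check the original poster mentions can be run from a second psql session while a low-scale, high-client pgbench run is active. This is only a sketch, assuming the standard tables created by "pgbench -i"; nothing in it is specific to the poster's setup.

-- Run while e.g. "pgbench -c 32 -t 5000" is active against a database
-- initialized with "-s 1"; shows lock requests queued on the tiny
-- pgbench_branches and pgbench_tellers tables.
SELECT relation::regclass AS rel,
       mode,
       granted,
       count(*) AS sessions
FROM pg_locks
WHERE relation IN ('pgbench_branches'::regclass,
                   'pgbench_tellers'::regclass)
GROUP BY 1, 2, 3
ORDER BY rel, granted;

With scale factor 1 and 32 clients, expect a pile of ungranted entries; after re-initializing with a scale factor at least as large as the client count (for example "pgbench -i -s 100"), the same query should show little or no queuing.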
[
{
"msg_contents": "Dear all,\n\nI have a problem with seqscan I hope you might help me with.\nAttached is the simple script that reproduces a database and results, which I \nhave tested both on 9.0.4 and 9.3-devel with identical results.\n\nI need to have a sort of a time machine, where select statements on tables\ncould be easily replaced to select statements on tables as they were some time in the past, \nincluding all related table. To do so, I used views (see in the script) that UNION\nboth current and archive tables and filter them by a timestamp.\n\nThe problem arises when there are two such views used in a JOIN, and apparently\nthe query planner doesn't look deep enough into views, creating a very slow\nseqscan-based plan. The setup here demonstrates how a join that needs to\nextract a single row, includes a seqscan on the whole table (see 1.Bad plan in\nexplain.txt, and 1000 of rows are being scanned. For the test purposes 1000\nrows is not a high number, but on my system this is several millions, and that\ntakes significant time.\n\nIf I rewrite the query into what I would expect the planner would do for me\n(see 2.Good plan), then (expectably) there are no seqscans. But I'm using an ORM\nwhich can't rewrite joins in such a way automatically, and there are so many of\nthose automated queries that rewriting them by hand is also a rather bad\nalternative. So my question is, is it possible to somehow nudge the planner\ninto the right direction?\n\nThank you in advance!\n\n-- \nSincerely,\n\tDmitry Karasik\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 27 Feb 2013 11:03:52 +0100",
"msg_from": "Dmitry Karasik <[email protected]>",
"msg_from_op": true,
"msg_subject": "seqscan on UNION'ed views"
},
{
"msg_contents": "Dmitry Karasik <[email protected]> writes:\n> I need to have a sort of a time machine, where select statements on tables\n> could be easily replaced to select statements on tables as they were some time in the past, \n> including all related table. To do so, I used views (see in the script) that UNION\n> both current and archive tables and filter them by a timestamp.\n\nIf you use UNION ALL instead of UNION, you should get better results\n(as well as inherently cheaper queries, since no duplicate-elimination\nstep will be needed).\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 27 Feb 2013 09:21:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seqscan on UNION'ed views"
}
] |
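Applied to the kind of "time machine" view described above, the UNION ALL suggestion would look roughly like the sketch below. The table and column names are invented for illustration, since the original schema is in an attached script rather than quoted in the thread; the only point is replacing UNION with UNION ALL so the planner can push join quals into each branch without a duplicate-elimination step.

-- Hypothetical names; only the UNION ALL is the point.
CREATE OR REPLACE VIEW orders_asof AS
SELECT id, customer_id, amount, valid_from, valid_to
FROM orders                      -- current rows
UNION ALL
SELECT id, customer_id, amount, valid_from, valid_to
FROM orders_archive;             -- archived rows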
[
{
"msg_contents": "Hi,\n\nDoes any one can tell me why the same query runs against on smaller data is\nslower than bigger table. thanks very much.\n\nI am using PostgreSQL9.1.8.\n\n*t_apps_1 and t_estimate_1 are about 300M respectively, while *_list_1\nabout 10M more or less. According to the result, it need to read a lot of\nblocks(112) from disk.*\nexplain (ANALYZE ON, BUFFERS ON, verbose on\n) SELECT e.t_id, SUM(e.estimate) as est\n FROM\n t_estimate_list_1 l,\n t_apps_list_1 rl,\n t_apps_1 r,\n t_estimate_1 e\n WHERE\n l.id = rl.dsf_id and\n l.date = '2012-07-01' and\n l.fed_id = 202 and\n l.st_id = 143464 and\n rl.cat_id = 12201 and\n l.id = e.list_id and\n rl.id = r.list_id and\n r.t_id = e.t_id\n GROUP BY e.t_id;\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=2529.91..2530.06 rows=15 width=8) (actual\ntime=1041.391..1041.409 rows=97 loops=1)\n Buffers: shared hit=304 read=112\n -> Nested Loop (cost=0.00..2529.84 rows=15 width=8) (actual\ntime=96.752..1041.145 rows=97 loops=1)\n *Buffers: shared hit=304 read=112*\n -> Nested Loop (cost=0.00..312.60 rows=242 width=12) (actual\ntime=62.035..70.239 rows=97 loops=1)\n Buffers: shared hit=18 read=10\n -> Nested Loop (cost=0.00..16.56 rows=1 width=12) (actual\ntime=19.520..19.521 rows=1 loops=1)\n Buffers: shared hit=3 read=6\n -> Index Scan using t_estimate_list_1_unique on\nt_estimate_list_1 l (cost=0.00..8.27 rows=1 width=4) (actual\ntime=11.175..11.176 rows=1 loops=1)\n Index Cond: ((date = '2012-07-01'::date) AND\n(st_id = 143464) AND (fed_id = 202))\n Buffers: shared hit=2 read=4\n -> Index Scan using t_apps_list_1_unique on\nt_apps_list_1 rl (cost=0.00..8.28 rows=1 width=8) (actual\ntime=8.339..8.339 rows=1 loops=1)\n Index Cond: ((dsf_id = l.id) AND (cat_id =\n12201))\n Buffers: shared hit=1 read=2\n -> Index Scan using t_apps_1_pkey on t_apps_1 r\n (cost=0.00..288.56 rows=598 width=8) (actual time=42.513..50.676 rows=97\nloops=1)\n Index Cond: (list_id = rl.id)\n Buffers: shared hit=15 read=4\n -> Index Scan using t_estimate_1_pkey on t_estimate_1 e\n (cost=0.00..9.15 rows=1 width=12) (actual time=10.006..10.007 rows=1\nloops=97)\n Index Cond: ((list_id = l.id) AND (t_id = r.t_id))\n Buffers: shared hit=286 read=102\n* Total runtime: 1041.511 ms*\n(21 rows)\n\n*The table *_30 are about 30 times larger than *_1 in the above SQL.\nAccording to the result, it need to read a lot of blocks(22) from disk. 
*\nexplain (ANALYZE ON, BUFFERS ON\n) SELECT e.t_id, SUM(e.estimate) as est\n FROM\n t_estimate_list_30 l,\n t_apps_list_30 rl,\n t_apps_30 r,\n t_estimate_30 e\n WHERE\n l.id = rl.dsf_id and\n l.date = '2012-07-01' and\n l.fed_id = 202 and\n l.st_id = 143464 and\n rl.cat_id = 12201 and\n l.id = e.list_id and\n rl.id = r.list_id and\n r.t_id = e.t_id\n GROUP BY e.t_id;\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=3494.89..3495.04 rows=15 width=8) (actual\ntime=160.612..160.632 rows=97 loops=1)\n Buffers: shared hit=493 read=22\n -> Nested Loop (cost=0.00..3494.81 rows=15 width=8) (actual\ntime=151.183..160.533 rows=97 loops=1)\n *Buffers: shared hit=493 read=22*\n -> Nested Loop (cost=0.00..431.42 rows=240 width=12) (actual\ntime=105.810..106.597 rows=97 loops=1)\n Buffers: shared hit=20 read=10\n -> Nested Loop (cost=0.00..16.58 rows=1 width=12) (actual\ntime=52.804..52.805 rows=1 loops=1)\n Buffers: shared hit=4 read=6\n -> Index Scan using t_estimate_list_5_unique on\nt_estimate_list_5 l (cost=0.00..8.27 rows=1 width=4) (actual\ntime=19.846..19.846 rows=1 loops=1)\n Index Cond: ((date = '2012-07-01'::date) AND\n(st_id = 143464) AND (fed_id = 202))\n Buffers: shared hit=2 read=4\n -> Index Scan using t_apps_list_5_unique on\nt_apps_list_5 rl (cost=0.00..8.30 rows=1 width=8) (actual\ntime=32.951..32.952 rows=1 loops=1)\n Index Cond: ((dsf_id = l.id) AND (cat_id =\n12201))\n Buffers: shared hit=2 read=2\n -> Index Scan using t_apps_5_pkey on t_apps_5 r\n (cost=0.00..393.68 rows=1693 width=8) (actual time=53.004..53.755 rows=97\nloops=1)\n Index Cond: (list_id = rl.id)\n Buffers: shared hit=16 read=4\n -> Index Scan using t_estimate_5_pkey on t_estimate_5 e\n (cost=0.00..12.75 rows=1 width=12) (actual time=0.555..0.555 rows=1\nloops=97)\n Index Cond: ((list_id = l.id) AND (t_id = r.t_id))\n Buffers: shared hit=473 read=12\n* Total runtime: 160.729 ms*\n(21 rows)\n\nHi,Does any one can tell me why the same query runs against on smaller data is slower than bigger table. thanks very much.\nI am using PostgreSQL9.1.8.t_apps_1 and t_estimate_1 are about 300M respectively, while *_list_1 about 10M more or less. 
According to the result, it need to read a lot of blocks(112) from disk.\nexplain (ANALYZE ON, BUFFERS ON, verbose on) SELECT e.t_id, SUM(e.estimate) as est FROM \n t_estimate_list_1 l, t_apps_list_1 rl, t_apps_1 r,\n t_estimate_1 e WHERE l.id = rl.dsf_id and\n l.date = '2012-07-01' and l.fed_id = 202 and l.st_id = 143464 and\n rl.cat_id = 12201 and l.id = e.list_id and\n rl.id = r.list_id and\n r.t_id = e.t_id GROUP BY e.t_id;----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=2529.91..2530.06 rows=15 width=8) (actual time=1041.391..1041.409 rows=97 loops=1) Buffers: shared hit=304 read=112\n -> Nested Loop (cost=0.00..2529.84 rows=15 width=8) (actual time=96.752..1041.145 rows=97 loops=1) Buffers: shared hit=304 read=112\n -> Nested Loop (cost=0.00..312.60 rows=242 width=12) (actual time=62.035..70.239 rows=97 loops=1) Buffers: shared hit=18 read=10\n -> Nested Loop (cost=0.00..16.56 rows=1 width=12) (actual time=19.520..19.521 rows=1 loops=1) Buffers: shared hit=3 read=6\n -> Index Scan using t_estimate_list_1_unique on t_estimate_list_1 l (cost=0.00..8.27 rows=1 width=4) (actual time=11.175..11.176 rows=1 loops=1)\n Index Cond: ((date = '2012-07-01'::date) AND (st_id = 143464) AND (fed_id = 202)) Buffers: shared hit=2 read=4\n -> Index Scan using t_apps_list_1_unique on t_apps_list_1 rl (cost=0.00..8.28 rows=1 width=8) (actual time=8.339..8.339 rows=1 loops=1)\n Index Cond: ((dsf_id = l.id) AND (cat_id = 12201))\n Buffers: shared hit=1 read=2 -> Index Scan using t_apps_1_pkey on t_apps_1 r (cost=0.00..288.56 rows=598 width=8) (actual time=42.513..50.676 rows=97 loops=1)\n Index Cond: (list_id = rl.id) Buffers: shared hit=15 read=4\n -> Index Scan using t_estimate_1_pkey on t_estimate_1 e (cost=0.00..9.15 rows=1 width=12) (actual time=10.006..10.007 rows=1 loops=97) Index Cond: ((list_id = l.id) AND (t_id = r.t_id))\n Buffers: shared hit=286 read=102 Total runtime: 1041.511 ms(21 rows)\nThe table *_30 are about 30 times larger than *_1 in the above SQL. According to the result, it need to read a lot of blocks(22) from disk. 
\nexplain (ANALYZE ON, BUFFERS ON) SELECT e.t_id, SUM(e.estimate) as est FROM \n t_estimate_list_30 l, t_apps_list_30 rl, t_apps_30 r,\n t_estimate_30 e WHERE l.id = rl.dsf_id and\n l.date = '2012-07-01' and l.fed_id = 202 and l.st_id = 143464 and\n rl.cat_id = 12201 and l.id = e.list_id and\n rl.id = r.list_id and\n r.t_id = e.t_id GROUP BY e.t_id; QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=3494.89..3495.04 rows=15 width=8) (actual time=160.612..160.632 rows=97 loops=1) Buffers: shared hit=493 read=22\n -> Nested Loop (cost=0.00..3494.81 rows=15 width=8) (actual time=151.183..160.533 rows=97 loops=1) Buffers: shared hit=493 read=22\n -> Nested Loop (cost=0.00..431.42 rows=240 width=12) (actual time=105.810..106.597 rows=97 loops=1) Buffers: shared hit=20 read=10\n -> Nested Loop (cost=0.00..16.58 rows=1 width=12) (actual time=52.804..52.805 rows=1 loops=1) Buffers: shared hit=4 read=6\n -> Index Scan using t_estimate_list_5_unique on t_estimate_list_5 l (cost=0.00..8.27 rows=1 width=4) (actual time=19.846..19.846 rows=1 loops=1)\n Index Cond: ((date = '2012-07-01'::date) AND (st_id = 143464) AND (fed_id = 202)) Buffers: shared hit=2 read=4\n -> Index Scan using t_apps_list_5_unique on t_apps_list_5 rl (cost=0.00..8.30 rows=1 width=8) (actual time=32.951..32.952 rows=1 loops=1)\n Index Cond: ((dsf_id = l.id) AND (cat_id = 12201)) Buffers: shared hit=2 read=2\n -> Index Scan using t_apps_5_pkey on t_apps_5 r (cost=0.00..393.68 rows=1693 width=8) (actual time=53.004..53.755 rows=97 loops=1) Index Cond: (list_id = rl.id)\n Buffers: shared hit=16 read=4 -> Index Scan using t_estimate_5_pkey on t_estimate_5 e (cost=0.00..12.75 rows=1 width=12) (actual time=0.555..0.555 rows=1 loops=97)\n Index Cond: ((list_id = l.id) AND (t_id = r.t_id)) Buffers: shared hit=473 read=12\n Total runtime: 160.729 ms(21 rows)",
"msg_date": "Thu, 28 Feb 2013 23:11:16 +0800",
"msg_from": "Ao Jianwang <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELECT is slow on smaller table?"
},
{
"msg_contents": "On 02/28/2013 16:11, Ao Jianwang wrote:\n> Hi,\n>\n> Does any one can tell me why the same query runs against on smaller \n> data is slower than bigger table. thanks very much.\n>\n> I am using PostgreSQL9.1.8.\n>\n> *t_apps_1 and t_estimate_1 are about 300M respectively, while *_list_1 \n> about 10M more or less. According to the result, it need to read a lot \n> of blocks(112) from disk.*\n> explain (ANALYZE ON, BUFFERS ON, verbose on\n> ) SELECT e.t_id, SUM(e.estimate) as est\n> FROM\n> t_estimate_list_1 l,\n> t_apps_list_1 rl,\n> t_apps_1 r,\n> t_estimate_1 e\n> WHERE\n> l.id <http://l.id> = rl.dsf_id and\n> l.date = '2012-07-01' and\n> l.fed_id = 202 and\n> l.st_id = 143464 and\n> rl.cat_id = 12201 and\n> l.id <http://l.id> = e.list_id and\n> rl.id <http://rl.id> = r.list_id and\n> r.t_id = e.t_id\n> GROUP BY e.t_id;\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=2529.91..2530.06 rows=15 width=8) (actual \n> time=1041.391..1041.409 rows=97 loops=1)\n> Buffers: shared hit=304 read=112\n> -> Nested Loop (cost=0.00..2529.84 rows=15 width=8) (actual \n> time=96.752..1041.145 rows=97 loops=1)\n> *Buffers: shared hit=304 read=112*\n> -> Nested Loop (cost=0.00..312.60 rows=242 width=12) \n> (actual time=62.035..70.239 rows=97 loops=1)\n> Buffers: shared hit=18 read=10\n> -> Nested Loop (cost=0.00..16.56 rows=1 width=12) \n> (actual time=19.520..19.521 rows=1 loops=1)\n> Buffers: shared hit=3 read=6\n> -> Index Scan using t_estimate_list_1_unique on t_estimate_list_1 l \n> (cost=0.00..8.27 rows=1 width=4) (actual time=11.175..11.176 rows=1 \n> loops=1)\n> Index Cond: ((date = '2012-07-01'::date) AND (st_id = 143464) AND \n> (fed_id = 202))\n> Buffers: shared hit=2 read=4\n> -> Index Scan using t_apps_list_1_unique on t_apps_list_1 rl \n> (cost=0.00..8.28 rows=1 width=8) (actual time=8.339..8.339 rows=1 \n> loops=1)\n> Index Cond: ((dsf_id = l.id <http://l.id>) AND (cat_id = 12201))\n> Buffers: shared hit=1 read=2\n> -> Index Scan using t_apps_1_pkey on t_apps_1 r \n> (cost=0.00..288.56 rows=598 width=8) (actual time=42.513..50.676 \n> rows=97 loops=1)\n> Index Cond: (list_id = rl.id <http://rl.id>)\n> Buffers: shared hit=15 read=4\n> -> Index Scan using t_estimate_1_pkey on t_estimate_1 e \n> (cost=0.00..9.15 rows=1 width=12) (actual time=10.006..10.007 rows=1 \n> loops=97)\n> Index Cond: ((list_id = l.id <http://l.id>) AND (t_id = \n> r.t_id))\n> Buffers: shared hit=286 read=102\n> * Total runtime: 1041.511 ms*\n> (21 rows)\n>\n> *The table *_30 are about 30 times larger than *_1 in the above SQL. \n> According to the result, it need to read a lot of blocks(22) from disk. 
*\n> explain (ANALYZE ON, BUFFERS ON\n> ) SELECT e.t_id, SUM(e.estimate) as est\n> FROM\n> t_estimate_list_30 l,\n> t_apps_list_30 rl,\n> t_apps_30 r,\n> t_estimate_30 e\n> WHERE\n> l.id <http://l.id> = rl.dsf_id and\n> l.date = '2012-07-01' and\n> l.fed_id = 202 and\n> l.st_id = 143464 and\n> rl.cat_id = 12201 and\n> l.id <http://l.id> = e.list_id and\n> rl.id <http://rl.id> = r.list_id and\n> r.t_id = e.t_id\n> GROUP BY e.t_id;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=3494.89..3495.04 rows=15 width=8) (actual \n> time=160.612..160.632 rows=97 loops=1)\n> Buffers: shared hit=493 read=22\n> -> Nested Loop (cost=0.00..3494.81 rows=15 width=8) (actual \n> time=151.183..160.533 rows=97 loops=1)\n> *Buffers: shared hit=493 read=22*\n> -> Nested Loop (cost=0.00..431.42 rows=240 width=12) \n> (actual time=105.810..106.597 rows=97 loops=1)\n> Buffers: shared hit=20 read=10\n> -> Nested Loop (cost=0.00..16.58 rows=1 width=12) \n> (actual time=52.804..52.805 rows=1 loops=1)\n> Buffers: shared hit=4 read=6\n> -> Index Scan using t_estimate_list_5_unique on t_estimate_list_5 l \n> (cost=0.00..8.27 rows=1 width=4) (actual time=19.846..19.846 rows=1 \n> loops=1)\n> Index Cond: ((date = '2012-07-01'::date) AND (st_id = 143464) AND \n> (fed_id = 202))\n> Buffers: shared hit=2 read=4\n> -> Index Scan using t_apps_list_5_unique on t_apps_list_5 rl \n> (cost=0.00..8.30 rows=1 width=8) (actual time=32.951..32.952 rows=1 \n> loops=1)\n> Index Cond: ((dsf_id = l.id <http://l.id>) AND (cat_id = 12201))\n> Buffers: shared hit=2 read=2\n> -> Index Scan using t_apps_5_pkey on t_apps_5 r \n> (cost=0.00..393.68 rows=1693 width=8) (actual time=53.004..53.755 \n> rows=97 loops=1)\n> Index Cond: (list_id = rl.id <http://rl.id>)\n> Buffers: shared hit=16 read=4\n> -> Index Scan using t_estimate_5_pkey on t_estimate_5 e \n> (cost=0.00..12.75 rows=1 width=12) (actual time=0.555..0.555 rows=1 \n> loops=97)\n> Index Cond: ((list_id = l.id <http://l.id>) AND (t_id = \n> r.t_id))\n> Buffers: shared hit=473 read=12\n> * Total runtime: 160.729 ms*\n> (21 rows)\n>\n>\n\nProbably that somes pages have to be loaded in memory ...\nIt should be faster if you re-run the same query just after\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.\n\n\n\n\n\n\n\nOn 02/28/2013 16:11, Ao Jianwang wrote:\n\nHi,\n\n\n\nDoes any one can tell\n me why the same query runs against on smaller data is slower\n than bigger table. thanks very much.\n\n\nI am using\n PostgreSQL9.1.8.\n\n\nt_apps_1\n and t_estimate_1 are about 300M respectively,\n while *_list_1 about 10M more or less. 
According to the\n result, it need to read a lot of blocks(112) from disk.\nexplain (ANALYZE ON,\n BUFFERS ON, verbose on\n) SELECT e.t_id,\n SUM(e.estimate) as est\n FROM \n \n t_estimate_list_1 l, \n \n t_apps_list_1 rl, \n \n t_apps_1 r,\n \n t_estimate_1 e\n WHERE \n l.id\n = rl.dsf_id and\n l.date\n = '2012-07-01' and\n \n l.fed_id = 202 and\n l.st_id\n = 143464 and\n \n rl.cat_id = 12201 and\n l.id\n = e.list_id and\n rl.id\n = r.list_id and\n r.t_id\n = e.t_id\n GROUP BY\n e.t_id;\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate\n (cost=2529.91..2530.06 rows=15 width=8) (actual\n time=1041.391..1041.409 rows=97 loops=1)\n Buffers: shared\n hit=304 read=112\n -> Nested Loop\n (cost=0.00..2529.84 rows=15 width=8) (actual\n time=96.752..1041.145 rows=97 loops=1)\n Buffers:\n shared hit=304 read=112\n -> Nested\n Loop (cost=0.00..312.60 rows=242 width=12) (actual\n time=62.035..70.239 rows=97 loops=1)\n Buffers:\n shared hit=18 read=10\n ->\n Nested Loop (cost=0.00..16.56 rows=1 width=12) (actual\n time=19.520..19.521 rows=1 loops=1)\n \n Buffers: shared hit=3 read=6\n \n -> Index Scan using t_estimate_list_1_unique on\n t_estimate_list_1 l (cost=0.00..8.27 rows=1 width=4)\n (actual time=11.175..11.176 rows=1 loops=1)\n \n Index Cond: ((date = '2012-07-01'::date) AND (st_id =\n 143464) AND (fed_id = 202))\n \n Buffers: shared hit=2 read=4\n \n -> Index Scan using t_apps_list_1_unique on\n t_apps_list_1 rl (cost=0.00..8.28 rows=1 width=8) (actual\n time=8.339..8.339 rows=1 loops=1)\n\n \n Index Cond: ((dsf_id = l.id) AND (cat_id =\n 12201))\n \n Buffers: shared hit=1 read=2\n ->\n Index Scan using t_apps_1_pkey on t_apps_1 r\n (cost=0.00..288.56 rows=598 width=8) (actual\n time=42.513..50.676 rows=97 loops=1)\n \n Index Cond: (list_id = rl.id)\n \n Buffers: shared hit=15 read=4\n -> Index\n Scan using t_estimate_1_pkey on t_estimate_1 e\n (cost=0.00..9.15 rows=1 width=12) (actual\n time=10.006..10.007 rows=1 loops=97)\n Index\n Cond: ((list_id = l.id) AND (t_id =\n r.t_id))\n Buffers:\n shared hit=286 read=102\n Total runtime:\n 1041.511 ms\n(21 rows)\n\n\nThe\n table *_30 are about 30 times larger than *_1 in the above\n SQL. According to the result, it need to read a lot of\n blocks(22) from disk. 
\nexplain (ANALYZE ON,\n BUFFERS ON\n) SELECT e.t_id,\n SUM(e.estimate) as est\n FROM \n \n t_estimate_list_30 l, \n \n t_apps_list_30 rl, \n \n t_apps_30 r,\n \n t_estimate_30 e\n WHERE \n l.id\n = rl.dsf_id and\n l.date\n = '2012-07-01' and\n \n l.fed_id = 202 and\n l.st_id\n = 143464 and\n \n rl.cat_id = 12201 and\n l.id\n = e.list_id and\n rl.id\n = r.list_id and\n r.t_id\n = e.t_id\n GROUP BY\n e.t_id;\n \n \n QUERY PLAN \n \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate\n (cost=3494.89..3495.04 rows=15 width=8) (actual\n time=160.612..160.632 rows=97 loops=1)\n Buffers: shared\n hit=493 read=22\n -> Nested Loop\n (cost=0.00..3494.81 rows=15 width=8) (actual\n time=151.183..160.533 rows=97 loops=1)\n Buffers:\n shared hit=493 read=22\n -> Nested\n Loop (cost=0.00..431.42 rows=240 width=12) (actual\n time=105.810..106.597 rows=97 loops=1)\n Buffers:\n shared hit=20 read=10\n ->\n Nested Loop (cost=0.00..16.58 rows=1 width=12) (actual\n time=52.804..52.805 rows=1 loops=1)\n \n Buffers: shared hit=4 read=6\n \n -> Index Scan using t_estimate_list_5_unique on\n t_estimate_list_5 l (cost=0.00..8.27 rows=1 width=4)\n (actual time=19.846..19.846 rows=1 loops=1)\n \n Index Cond: ((date = '2012-07-01'::date) AND (st_id =\n 143464) AND (fed_id = 202))\n \n Buffers: shared hit=2 read=4\n \n -> Index Scan using t_apps_list_5_unique on\n t_apps_list_5 rl (cost=0.00..8.30 rows=1 width=8) (actual\n time=32.951..32.952 rows=1 loops=1)\n\n \n Index Cond: ((dsf_id = l.id) AND (cat_id =\n 12201))\n \n Buffers: shared hit=2 read=2\n ->\n Index Scan using t_apps_5_pkey on t_apps_5 r\n (cost=0.00..393.68 rows=1693 width=8) (actual\n time=53.004..53.755 rows=97 loops=1)\n \n Index Cond: (list_id = rl.id)\n \n Buffers: shared hit=16 read=4\n -> Index\n Scan using t_estimate_5_pkey on t_estimate_5 e\n (cost=0.00..12.75 rows=1 width=12) (actual\n time=0.555..0.555 rows=1 loops=97)\n Index\n Cond: ((list_id = l.id) AND (t_id =\n r.t_id))\n Buffers:\n shared hit=473 read=12\n Total runtime:\n 160.729 ms\n(21 rows)\n\n\n\n\n\n\n Probably that somes pages have to be loaded in memory ...\n It should be faster if you re-run the same query just after\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Thu, 28 Feb 2013 16:19:39 +0100",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELECT is slow on smaller table?"
},
{
"msg_contents": "Thanks Julien very much.\nTwo strange behaviors I found:\n1) Even I restart the machine and restart the PostgreSQL, then I execute\nthe query, i still see the shared_hit. It seems when start PG, i will\nautomatically load the data in the cache of the last time?\n2) After I rerun the query, the time for the smaller data is about 19ms,\nwhile the time for the bigger data is about 17ms. And the trend is the time\nfor bigger data is always faster than the smaller data for about 1 to 2 ms\n\nAny suggestions? thanks very much.\n\n\nOn Thu, Feb 28, 2013 at 11:19 PM, Julien Cigar <[email protected]> wrote:\n\n> On 02/28/2013 16:11, Ao Jianwang wrote:\n>\n> Hi,\n>\n> Does any one can tell me why the same query runs against on smaller data\n> is slower than bigger table. thanks very much.\n>\n> I am using PostgreSQL9.1.8.\n>\n> *t_apps_1 and t_estimate_1 are about 300M respectively, while *_list_1\n> about 10M more or less. According to the result, it need to read a lot of\n> blocks(112) from disk.*\n> explain (ANALYZE ON, BUFFERS ON, verbose on\n> ) SELECT e.t_id, SUM(e.estimate) as est\n> FROM\n> t_estimate_list_1 l,\n> t_apps_list_1 rl,\n> t_apps_1 r,\n> t_estimate_1 e\n> WHERE\n> l.id = rl.dsf_id and\n> l.date = '2012-07-01' and\n> l.fed_id = 202 and\n> l.st_id = 143464 and\n> rl.cat_id = 12201 and\n> l.id = e.list_id and\n> rl.id = r.list_id and\n> r.t_id = e.t_id\n> GROUP BY e.t_id;\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=2529.91..2530.06 rows=15 width=8) (actual\n> time=1041.391..1041.409 rows=97 loops=1)\n> Buffers: shared hit=304 read=112\n> -> Nested Loop (cost=0.00..2529.84 rows=15 width=8) (actual\n> time=96.752..1041.145 rows=97 loops=1)\n> *Buffers: shared hit=304 read=112*\n> -> Nested Loop (cost=0.00..312.60 rows=242 width=12) (actual\n> time=62.035..70.239 rows=97 loops=1)\n> Buffers: shared hit=18 read=10\n> -> Nested Loop (cost=0.00..16.56 rows=1 width=12) (actual\n> time=19.520..19.521 rows=1 loops=1)\n> Buffers: shared hit=3 read=6\n> -> Index Scan using t_estimate_list_1_unique on\n> t_estimate_list_1 l (cost=0.00..8.27 rows=1 width=4) (actual\n> time=11.175..11.176 rows=1 loops=1)\n> Index Cond: ((date = '2012-07-01'::date) AND\n> (st_id = 143464) AND (fed_id = 202))\n> Buffers: shared hit=2 read=4\n> -> Index Scan using t_apps_list_1_unique on\n> t_apps_list_1 rl (cost=0.00..8.28 rows=1 width=8) (actual\n> time=8.339..8.339 rows=1 loops=1)\n> Index Cond: ((dsf_id = l.id) AND (cat_id =\n> 12201))\n> Buffers: shared hit=1 read=2\n> -> Index Scan using t_apps_1_pkey on t_apps_1 r\n> (cost=0.00..288.56 rows=598 width=8) (actual time=42.513..50.676 rows=97\n> loops=1)\n> Index Cond: (list_id = rl.id)\n> Buffers: shared hit=15 read=4\n> -> Index Scan using t_estimate_1_pkey on t_estimate_1 e\n> (cost=0.00..9.15 rows=1 width=12) (actual time=10.006..10.007 rows=1\n> loops=97)\n> Index Cond: ((list_id = l.id) AND (t_id = r.t_id))\n> Buffers: shared hit=286 read=102\n> * Total runtime: 1041.511 ms*\n> (21 rows)\n>\n> *The table *_30 are about 30 times larger than *_1 in the above SQL.\n> According to the result, it need to read a lot of blocks(22) from disk. 
*\n> explain (ANALYZE ON, BUFFERS ON\n> ) SELECT e.t_id, SUM(e.estimate) as est\n> FROM\n> t_estimate_list_30 l,\n> t_apps_list_30 rl,\n> t_apps_30 r,\n> t_estimate_30 e\n> WHERE\n> l.id = rl.dsf_id and\n> l.date = '2012-07-01' and\n> l.fed_id = 202 and\n> l.st_id = 143464 and\n> rl.cat_id = 12201 and\n> l.id = e.list_id and\n> rl.id = r.list_id and\n> r.t_id = e.t_id\n> GROUP BY e.t_id;\n>\n> QUERY PLAN\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=3494.89..3495.04 rows=15 width=8) (actual\n> time=160.612..160.632 rows=97 loops=1)\n> Buffers: shared hit=493 read=22\n> -> Nested Loop (cost=0.00..3494.81 rows=15 width=8) (actual\n> time=151.183..160.533 rows=97 loops=1)\n> *Buffers: shared hit=493 read=22*\n> -> Nested Loop (cost=0.00..431.42 rows=240 width=12) (actual\n> time=105.810..106.597 rows=97 loops=1)\n> Buffers: shared hit=20 read=10\n> -> Nested Loop (cost=0.00..16.58 rows=1 width=12) (actual\n> time=52.804..52.805 rows=1 loops=1)\n> Buffers: shared hit=4 read=6\n> -> Index Scan using t_estimate_list_5_unique on\n> t_estimate_list_5 l (cost=0.00..8.27 rows=1 width=4) (actual\n> time=19.846..19.846 rows=1 loops=1)\n> Index Cond: ((date = '2012-07-01'::date) AND\n> (st_id = 143464) AND (fed_id = 202))\n> Buffers: shared hit=2 read=4\n> -> Index Scan using t_apps_list_5_unique on\n> t_apps_list_5 rl (cost=0.00..8.30 rows=1 width=8) (actual\n> time=32.951..32.952 rows=1 loops=1)\n> Index Cond: ((dsf_id = l.id) AND (cat_id =\n> 12201))\n> Buffers: shared hit=2 read=2\n> -> Index Scan using t_apps_5_pkey on t_apps_5 r\n> (cost=0.00..393.68 rows=1693 width=8) (actual time=53.004..53.755 rows=97\n> loops=1)\n> Index Cond: (list_id = rl.id)\n> Buffers: shared hit=16 read=4\n> -> Index Scan using t_estimate_5_pkey on t_estimate_5 e\n> (cost=0.00..12.75 rows=1 width=12) (actual time=0.555..0.555 rows=1\n> loops=97)\n> Index Cond: ((list_id = l.id) AND (t_id = r.t_id))\n> Buffers: shared hit=473 read=12\n> * Total runtime: 160.729 ms*\n> (21 rows)\n>\n>\n>\n> Probably that somes pages have to be loaded in memory ...\n> It should be faster if you re-run the same query just after\n>\n> --\n> No trees were killed in the creation of this message.\n> However, many electrons were terribly inconvenienced.\n>\n>\n\nThanks Julien very much.Two strange behaviors I found:1) Even I restart the machine and restart the PostgreSQL, then I execute the query, i still see the shared_hit. It seems when start PG, i will automatically load the data in the cache of the last time?\n2) After I rerun the query, the time for the smaller data is about 19ms, while the time for the bigger data is about 17ms. And the trend is the time for bigger data is always faster than the smaller data for about 1 to 2 ms\nAny suggestions? thanks very much. On Thu, Feb 28, 2013 at 11:19 PM, Julien Cigar <[email protected]> wrote:\n\n\nOn 02/28/2013 16:11, Ao Jianwang wrote:\n\nHi,\n\n\n\nDoes any one can tell\n me why the same query runs against on smaller data is slower\n than bigger table. thanks very much.\n\n\nI am using\n PostgreSQL9.1.8.\n\n\nt_apps_1\n and t_estimate_1 are about 300M respectively,\n while *_list_1 about 10M more or less. 
According to the\n result, it need to read a lot of blocks(112) from disk.\nexplain (ANALYZE ON,\n BUFFERS ON, verbose on\n) SELECT e.t_id,\n SUM(e.estimate) as est\n FROM \n \n t_estimate_list_1 l, \n \n t_apps_list_1 rl, \n \n t_apps_1 r,\n \n t_estimate_1 e\n WHERE \n l.id\n = rl.dsf_id and\n l.date\n = '2012-07-01' and\n \n l.fed_id = 202 and\n l.st_id\n = 143464 and\n \n rl.cat_id = 12201 and\n l.id\n = e.list_id and\n rl.id\n = r.list_id and\n r.t_id\n = e.t_id\n GROUP BY\n e.t_id;\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate\n (cost=2529.91..2530.06 rows=15 width=8) (actual\n time=1041.391..1041.409 rows=97 loops=1)\n Buffers: shared\n hit=304 read=112\n -> Nested Loop\n (cost=0.00..2529.84 rows=15 width=8) (actual\n time=96.752..1041.145 rows=97 loops=1)\n Buffers:\n shared hit=304 read=112\n -> Nested\n Loop (cost=0.00..312.60 rows=242 width=12) (actual\n time=62.035..70.239 rows=97 loops=1)\n Buffers:\n shared hit=18 read=10\n ->\n Nested Loop (cost=0.00..16.56 rows=1 width=12) (actual\n time=19.520..19.521 rows=1 loops=1)\n \n Buffers: shared hit=3 read=6\n \n -> Index Scan using t_estimate_list_1_unique on\n t_estimate_list_1 l (cost=0.00..8.27 rows=1 width=4)\n (actual time=11.175..11.176 rows=1 loops=1)\n \n Index Cond: ((date = '2012-07-01'::date) AND (st_id =\n 143464) AND (fed_id = 202))\n \n Buffers: shared hit=2 read=4\n \n -> Index Scan using t_apps_list_1_unique on\n t_apps_list_1 rl (cost=0.00..8.28 rows=1 width=8) (actual\n time=8.339..8.339 rows=1 loops=1)\n\n \n Index Cond: ((dsf_id = l.id) AND (cat_id =\n 12201))\n \n Buffers: shared hit=1 read=2\n ->\n Index Scan using t_apps_1_pkey on t_apps_1 r\n (cost=0.00..288.56 rows=598 width=8) (actual\n time=42.513..50.676 rows=97 loops=1)\n \n Index Cond: (list_id = rl.id)\n \n Buffers: shared hit=15 read=4\n -> Index\n Scan using t_estimate_1_pkey on t_estimate_1 e\n (cost=0.00..9.15 rows=1 width=12) (actual\n time=10.006..10.007 rows=1 loops=97)\n Index\n Cond: ((list_id = l.id) AND (t_id =\n r.t_id))\n Buffers:\n shared hit=286 read=102\n Total runtime:\n 1041.511 ms\n(21 rows)\n\n\nThe\n table *_30 are about 30 times larger than *_1 in the above\n SQL. According to the result, it need to read a lot of\n blocks(22) from disk. 
\nexplain (ANALYZE ON,\n BUFFERS ON\n) SELECT e.t_id,\n SUM(e.estimate) as est\n FROM \n \n t_estimate_list_30 l, \n \n t_apps_list_30 rl, \n \n t_apps_30 r,\n \n t_estimate_30 e\n WHERE \n l.id\n = rl.dsf_id and\n l.date\n = '2012-07-01' and\n \n l.fed_id = 202 and\n l.st_id\n = 143464 and\n \n rl.cat_id = 12201 and\n l.id\n = e.list_id and\n rl.id\n = r.list_id and\n r.t_id\n = e.t_id\n GROUP BY\n e.t_id;\n \n \n QUERY PLAN \n \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate\n (cost=3494.89..3495.04 rows=15 width=8) (actual\n time=160.612..160.632 rows=97 loops=1)\n Buffers: shared\n hit=493 read=22\n -> Nested Loop\n (cost=0.00..3494.81 rows=15 width=8) (actual\n time=151.183..160.533 rows=97 loops=1)\n Buffers:\n shared hit=493 read=22\n -> Nested\n Loop (cost=0.00..431.42 rows=240 width=12) (actual\n time=105.810..106.597 rows=97 loops=1)\n Buffers:\n shared hit=20 read=10\n ->\n Nested Loop (cost=0.00..16.58 rows=1 width=12) (actual\n time=52.804..52.805 rows=1 loops=1)\n \n Buffers: shared hit=4 read=6\n \n -> Index Scan using t_estimate_list_5_unique on\n t_estimate_list_5 l (cost=0.00..8.27 rows=1 width=4)\n (actual time=19.846..19.846 rows=1 loops=1)\n \n Index Cond: ((date = '2012-07-01'::date) AND (st_id =\n 143464) AND (fed_id = 202))\n \n Buffers: shared hit=2 read=4\n \n -> Index Scan using t_apps_list_5_unique on\n t_apps_list_5 rl (cost=0.00..8.30 rows=1 width=8) (actual\n time=32.951..32.952 rows=1 loops=1)\n\n \n Index Cond: ((dsf_id = l.id) AND (cat_id =\n 12201))\n \n Buffers: shared hit=2 read=2\n ->\n Index Scan using t_apps_5_pkey on t_apps_5 r\n (cost=0.00..393.68 rows=1693 width=8) (actual\n time=53.004..53.755 rows=97 loops=1)\n \n Index Cond: (list_id = rl.id)\n \n Buffers: shared hit=16 read=4\n -> Index\n Scan using t_estimate_5_pkey on t_estimate_5 e\n (cost=0.00..12.75 rows=1 width=12) (actual\n time=0.555..0.555 rows=1 loops=97)\n Index\n Cond: ((list_id = l.id) AND (t_id =\n r.t_id))\n Buffers:\n shared hit=473 read=12\n Total runtime:\n 160.729 ms\n(21 rows)\n\n\n\n\n\n\n Probably that somes pages have to be loaded in memory ...\n It should be faster if you re-run the same query just after\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.",
"msg_date": "Fri, 1 Mar 2013 08:30:09 +0800",
"msg_from": "Ao Jianwang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELECT is slow on smaller table?"
}
] |
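The difference between the two plans above is mostly in the "read" counts (blocks fetched from outside shared_buffers) versus "hit" counts, so which run is faster depends on what happens to be cached rather than on table size. A hedged way to see what is resident, assuming the pg_buffercache contrib module is available (it ships with 9.1) and the default 8 kB block size, is:

-- Sketch: which relations currently occupy shared_buffers?
CREATE EXTENSION pg_buffercache;

SELECT c.relname,
       count(*)                        AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 15;

Running this before and after each test query shows why a repeated run drops to a few milliseconds, and why the larger tables can come out ahead when their hot index pages happen to already be in cache. Restarting PostgreSQL empties shared_buffers, but the operating system page cache can still serve "read" blocks quickly, and the "hit" counts reported within a single EXPLAIN include pages that earlier steps of that same query pulled in.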
[
{
"msg_contents": "I am currently looking at a performance issue where a subquery is \nexecuted even though I cannot see the result ever being asked for or \nused. Not even google has helped me find any information about this one. \nI have created a simple test case to demonstrate this.\n\nCREATE TABLE test1 (t1 INT);\nINSERT INTO test1 VALUES(1);\nCREATE TABLE test2 (t2 INT);\nINSERT INTO test2 VALUES(1);\nCREATE TABLE test3 (t1 INT, cnt INT);\nINSERT INTO test3 VALUES(3, 3);\nCREATE VIEW tv AS SELECT t1, (SELECT COUNT(*) FROM test2 WHERE t1=t2) AS \nCNT FROM test1;\nCREATE VIEW tv2 AS SELECT t1, (SELECT COUNT(*) FROM test2 WHERE t1=t2) \nAS CNT FROM test1 UNION ALL SELECT t1, cnt FROM test3;\nEXPLAIN ANALYZE SELECT t1 FROM tv;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Seq Scan on test1 (cost=0.00..34.00 rows=2400 width=4) (actual \ntime=0.005..0.006 rows=1 loops=1)\n Total runtime: 0.033 ms\n(2 rows)\nEXPLAIN ANALYZE SELECT t1 FROM tv2;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Subquery Scan on tv2 (cost=0.00..96252.20 rows=4540 width=4) (actual \ntime=0.026..0.042 rows=2 loops=1)\n -> Append (cost=0.00..96206.80 rows=4540 width=6) (actual \ntime=0.024..0.036 rows=2 loops=1)\n -> Seq Scan on test1 (cost=0.00..96130.00 rows=2400 width=4) \n(actual time=0.023..0.024 rows=1 loops=1)\n SubPlan 1\n -> Aggregate (cost=40.03..40.04 rows=1 width=0) \n(actual time=0.013..0.014 rows=1 loops=1)\n -> Seq Scan on test2 (cost=0.00..40.00 rows=12 \nwidth=0) (actual time=0.005..0.006 rows=1 loops=1)\n Filter: (test1.t1 = t2)\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..52.80 rows=2140 \nwidth=8) (actual time=0.005..0.007 rows=1 loops=1)\n -> Seq Scan on test3 (cost=0.00..31.40 rows=2140 \nwidth=8) (actual time=0.002..0.004 rows=1 loops=1)\n Total runtime: 0.089 ms\n(10 rows)\n\nAs can be seen in the last explain it does a sequential scan of table \ntest2 even though this is not needed. It will perform the scan once for \neach row in table test1. No scan of test2 is done if there is not a \nunion in the view. I cannot see any reason for this happening and am \nguessing that the query planner does not know that it can get rid of the \nsubquery. Is there anything I can do to rid of the subquery myself?\n\nThe postgresql version I am using is:\nopsspace=# select version();\nversion\n-------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.2.3 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) \n4.7.2 20121109 (Red Hat 4.7.2-8), 64-bit\n(1 row)\n\n\nMany thanks\nMarkus\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 01 Mar 2013 07:32:01 +0100",
"msg_from": "=?ISO-8859-1?Q?Markus_Herv=E9n?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Processing of subqueries in union"
}
] |
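One possible workaround for the behaviour described above, offered only as a sketch and not taken from the thread: the planner does not prune the correlated COUNT(*) subquery out of a UNION ALL branch even when the column is never referenced, so a companion view that simply omits the expensive column avoids the repeated scans of test2 for callers (such as an ORM query) that only need t1. Using the same test tables defined above:

-- Hypothetical companion view without the counted column.
CREATE VIEW tv2_slim AS
SELECT t1 FROM test1
UNION ALL
SELECT t1 FROM test3;

EXPLAIN ANALYZE SELECT t1 FROM tv2_slim;  -- neither branch touches test2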
[
{
"msg_contents": "Recently I moved my ~600G / ~15K TPS database from a\n48 [email protected] server with 512GB RAM on 15K RPM disk\nto a newer server with\n64 [email protected] server with 1T of RAM on 15K RPM disks\n\nThe move was from v9.1.4 to v9.1.8 (eventually also tested with v9.1.4 on\nthe new hardware) and was done via base backup followed by slave promotion.\nAll postgres configurations were matched exactly as were system and kernel\nparameters.\n\nOn the first day that this server saw production load levels it absolutely\nfell on its face. We ran an exhaustive battery of tests including failing\nover to the new (hardware matched) slave only to find the problem happening\nthere also.\n\nAfter several engineers all confirmed that every postgres and system\nsetting matched, we eventually migrated back onto the original hardware\nusing exactly the same methods and settings that had been used while the\ndata was on the new hardware. As soon as we brought the DB live on the\nolder (supposedly slower) hardware, everything started running smoothly\nagain.\n\nAs far as we were able to gather in the frantic moments of downtime,\nhundreds of queries were hanging up while trying to COMMIT. This in turn\ncaused new queries backup as they waited for locks and so on.\n\nPrior to failing back to the original hardware, we found interesting posts\nabout people having problems similar to ours due to NUMA and several\nsuggested that they had solved their problem by setting\nvm.zone_reclaim_mode = 0\n\nUnfortunately we experienced the exact same problems even after turning off\nthe zone_reclaim_mode. We did extensive testing of the i/o on the new\nhardware (both data and log arrays) before it was put into service and\nhave done even more comprehensive testing since it came out of service.\n The short version is that the disks on the new hardware are faster than\ndisks on the old server. In one test run we even set the server to write\nWALs to shared memory instead of to the log LV just to help rule out i/o\nproblems and only saw a marginal improvement in overall TPS numbers.\n\nAt this point we are extremely confident that if we have a configuration\nproblem, it is not with any of the usual postgresql.conf/sysctl.conf\nsuspects. We are pretty sure that the problem is being caused by the\nhardware in some way but that it is not the result of a hardware failure\n(e.g. degraded array, raid card self tests or what have you).\n\nGiven that we're dealing with new hardware and the fact that this still\nacts a lot like a NUMA issue, are there other settings we should be\nadjusting to deal with possible performance problems associated with NUMA?\n\nDoes this sound like something else entirely?\n\nAny thoughts appreciated.\n\nthanks,\nSteve\n\nRecently I moved my ~600G / ~15K TPS database from a 48 [email protected] server with 512GB RAM on 15K RPM diskto a newer server with 64 [email protected] server with 1T of RAM on 15K RPM disks\nThe move was from v9.1.4 to v9.1.8 (eventually also tested with v9.1.4 on the new hardware) and was done via base backup followed by slave promotion.All postgres configurations were matched exactly as were system and kernel parameters.\nOn the first day that this server saw production load levels it absolutely fell on its face. We ran an exhaustive battery of tests including failing over to the new (hardware matched) slave only to find the problem happening there also. 
\nAfter several engineers all confirmed that every postgres and system setting matched, we eventually migrated back onto the original hardware using exactly the same methods and settings that had been used while the data was on the new hardware. As soon as we brought the DB live on the older (supposedly slower) hardware, everything started running smoothly again. \nAs far as we were able to gather in the frantic moments of downtime, hundreds of queries were hanging up while trying to COMMIT. This in turn caused new queries backup as they waited for locks and so on. \nPrior to failing back to the original hardware, we found interesting posts about people having problems similar to ours due to NUMA and several suggested that they had solved their problem by setting vm.zone_reclaim_mode = 0 \nUnfortunately we experienced the exact same problems even after turning off the zone_reclaim_mode. We did extensive testing of the i/o on the new hardware (both data and log arrays) before it was put into service and have done even more comprehensive testing since it came out of service. The short version is that the disks on the new hardware are faster than disks on the old server. In one test run we even set the server to write WALs to shared memory instead of to the log LV just to help rule out i/o problems and only saw a marginal improvement in overall TPS numbers.\nAt this point we are extremely confident that if we have a configuration problem, it is not with any of the usual postgresql.conf/sysctl.conf suspects. We are pretty sure that the problem is being caused by the hardware in some way but that it is not the result of a hardware failure (e.g. degraded array, raid card self tests or what have you).\nGiven that we're dealing with new hardware and the fact that this still acts a lot like a NUMA issue, are there other settings we should be adjusting to deal with possible performance problems associated with NUMA?\nDoes this sound like something else entirely?Any thoughts appreciated.thanks,Steve",
"msg_date": "Fri, 1 Mar 2013 02:52:02 -0700",
"msg_from": "Steven Crandell <[email protected]>",
"msg_from_op": true,
"msg_subject": "hardware upgrade, performance degrade?"
},
{
"msg_contents": "Hi Steven,\n\nOn Fri, Mar 1, 2013 at 10:52 AM, Steven Crandell\n<[email protected]> wrote:\n> Given that we're dealing with new hardware and the fact that this still acts\n> a lot like a NUMA issue, are there other settings we should be adjusting to\n> deal with possible performance problems associated with NUMA?\n\nOn my kernel related thread list, I have:\nhttp://www.postgresql.org/message-id/CAOR=d=2tjWoxpQrUHpJK1R+BtEvBv4buiVtX4Qf6we=MHmghxw@mail.gmail.com\n(you haven't mentioned if the kernel version is identical for both\nservers)\nhttp://www.postgresql.org/message-id/[email protected]\nhttp://www.postgresql.org/message-id/[email protected]\nhttp://www.postgresql.org/message-id/CAMAsR=5F45+kj+hw9q+zE7zo=Qc0yBEB1sLXCF0QL+dWt_7KqQ@mail.gmail.com\n\nMight be of some interest to you.\n\n-- \nGuillaume\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 1 Mar 2013 12:13:08 +0100",
"msg_from": "Guillaume Smet <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware upgrade, performance degrade?"
},
{
"msg_contents": "On Fri, Mar 1, 2013 at 1:52 AM, Steven Crandell\n<[email protected]>wrote:\n\n> Recently I moved my ~600G / ~15K TPS database from a\n> 48 [email protected] server with 512GB RAM on 15K RPM disk\n> to a newer server with\n> 64 [email protected] server with 1T of RAM on 15K RPM disks\n>\n> The move was from v9.1.4 to v9.1.8 (eventually also tested with v9.1.4 on\n> the new hardware) and was done via base backup followed by slave promotion.\n> All postgres configurations were matched exactly as were system and kernel\n> parameters.\n>\n> On the first day that this server saw production load levels it absolutely\n> fell on its face. We ran an exhaustive battery of tests including failing\n> over to the new (hardware matched) slave only to find the problem happening\n> there also.\n>\n> After several engineers all confirmed that every postgres and system\n> setting matched, we eventually migrated back onto the original hardware\n> using exactly the same methods and settings that had been used while the\n> data was on the new hardware. As soon as we brought the DB live on the\n> older (supposedly slower) hardware, everything started running smoothly\n> again.\n>\n> As far as we were able to gather in the frantic moments of downtime,\n> hundreds of queries were hanging up while trying to COMMIT. This in turn\n> caused new queries backup as they waited for locks and so on.\n>\n> Prior to failing back to the original hardware, we found interesting posts\n> about people having problems similar to ours due to NUMA and several\n> suggested that they had solved their problem by setting\n> vm.zone_reclaim_mode = 0\n>\n> Unfortunately we experienced the exact same problems even after turning\n> off the zone_reclaim_mode. We did extensive testing of the i/o on the new\n> hardware (both data and log arrays) before it was put into service and\n> have done even more comprehensive testing since it came out of service.\n> The short version is that the disks on the new hardware are faster than\n> disks on the old server. In one test run we even set the server to write\n> WALs to shared memory instead of to the log LV just to help rule out i/o\n> problems and only saw a marginal improvement in overall TPS numbers.\n>\n> At this point we are extremely confident that if we have a configuration\n> problem, it is not with any of the usual postgresql.conf/sysctl.conf\n> suspects. We are pretty sure that the problem is being caused by the\n> hardware in some way but that it is not the result of a hardware failure\n> (e.g. degraded array, raid card self tests or what have you).\n>\n> Given that we're dealing with new hardware and the fact that this still\n> acts a lot like a NUMA issue, are there other settings we should be\n> adjusting to deal with possible performance problems associated with NUMA?\n>\n> Does this sound like something else entirely?\n>\n> Any thoughts appreciated.\n>\n\nOne piece of information that you didn't supply ... sorry if this is\nobvious, but did you run the usual range of performance tests using\npgbench, bonnie++ and so forth to confirm that the new server was working\nwell before you put it into production? 
Did it compare well on those same\ntests to your old hardware?\n\nCraig\n\n\n> thanks,\n> Steve\n>\n\nOn Fri, Mar 1, 2013 at 1:52 AM, Steven Crandell <[email protected]> wrote:\nRecently I moved my ~600G / ~15K TPS database from a 48 [email protected] server with 512GB RAM on 15K RPM diskto a newer server with 64 [email protected] server with 1T of RAM on 15K RPM disks\nThe move was from v9.1.4 to v9.1.8 (eventually also tested with v9.1.4 on the new hardware) and was done via base backup followed by slave promotion.All postgres configurations were matched exactly as were system and kernel parameters.\nOn the first day that this server saw production load levels it absolutely fell on its face. We ran an exhaustive battery of tests including failing over to the new (hardware matched) slave only to find the problem happening there also. \nAfter several engineers all confirmed that every postgres and system setting matched, we eventually migrated back onto the original hardware using exactly the same methods and settings that had been used while the data was on the new hardware. As soon as we brought the DB live on the older (supposedly slower) hardware, everything started running smoothly again. \nAs far as we were able to gather in the frantic moments of downtime, hundreds of queries were hanging up while trying to COMMIT. This in turn caused new queries backup as they waited for locks and so on. \nPrior to failing back to the original hardware, we found interesting posts about people having problems similar to ours due to NUMA and several suggested that they had solved their problem by setting vm.zone_reclaim_mode = 0 \nUnfortunately we experienced the exact same problems even after turning off the zone_reclaim_mode. We did extensive testing of the i/o on the new hardware (both data and log arrays) before it was put into service and have done even more comprehensive testing since it came out of service. The short version is that the disks on the new hardware are faster than disks on the old server. In one test run we even set the server to write WALs to shared memory instead of to the log LV just to help rule out i/o problems and only saw a marginal improvement in overall TPS numbers.\nAt this point we are extremely confident that if we have a configuration problem, it is not with any of the usual postgresql.conf/sysctl.conf suspects. We are pretty sure that the problem is being caused by the hardware in some way but that it is not the result of a hardware failure (e.g. degraded array, raid card self tests or what have you).\nGiven that we're dealing with new hardware and the fact that this still acts a lot like a NUMA issue, are there other settings we should be adjusting to deal with possible performance problems associated with NUMA?\nDoes this sound like something else entirely?Any thoughts appreciated.One piece of information that you didn't supply ... sorry if this is obvious, but did you run the usual range of performance tests using pgbench, bonnie++ and so forth to confirm that the new server was working well before you put it into production? Did it compare well on those same tests to your old hardware?\nCraigthanks,Steve",
"msg_date": "Fri, 1 Mar 2013 07:41:31 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware upgrade, performance degrade?"
},
{
"msg_contents": "We saw the same performance problems when this new hardware was running\ncent 6.3 with a 2.6.32-279.19.1.el6.x86_64 kernel and when it was matched\nto the OS/kernel of the old hardware which was cent 5.8 with\na 2.6.18-308.11.1.el5 kernel.\n\nYes the new hardware was thoroughly tested with bonnie before being put\ninto services and has been tested since. We are unable to find any\ninteresting differences in our bonnie tests comparisons between the old and\nnew hardware. pgbench was not used prior to our discovery of the problem\nbut has been used extensively since. FWIW This server ran a zabbix\ndatabase (much lower load requirements) for a month without any problems\nprior to taking over as our primary production DB server.\n\nAfter quite a bit of trial and error we were able to find a pgbench test\n(2x 300 concurrent client sessions doing selects along with 1x 50\nconcurrent user session doing the standard pgbench query rotation) that\nshowed the new hardware under performing when compared to the old hardware\nto the tune of about a 1000 TPS difference (2300 to 1300) for the 50\nconcurrent user pgbench run and about a 1000 less TPS for each of the\nselect only runs (~24000 to ~23000). Less demanding tests would be handled\nequally well by both old and new servers. More demanding tests would tip\nboth old and new over with very similar efficacy.\n\nHopefully that fleshes things out a bit more.\nPlease let me know if I can provide additional information.\n\nthanks\nsteve\n\n\nOn Fri, Mar 1, 2013 at 8:41 AM, Craig James <[email protected]> wrote:\n\n> On Fri, Mar 1, 2013 at 1:52 AM, Steven Crandell <[email protected]\n> > wrote:\n>\n>> Recently I moved my ~600G / ~15K TPS database from a\n>> 48 [email protected] server with 512GB RAM on 15K RPM disk\n>> to a newer server with\n>> 64 [email protected] server with 1T of RAM on 15K RPM disks\n>>\n>> The move was from v9.1.4 to v9.1.8 (eventually also tested with v9.1.4 on\n>> the new hardware) and was done via base backup followed by slave promotion.\n>> All postgres configurations were matched exactly as were system and\n>> kernel parameters.\n>>\n>> On the first day that this server saw production load levels it\n>> absolutely fell on its face. We ran an exhaustive battery of tests\n>> including failing over to the new (hardware matched) slave only to find the\n>> problem happening there also.\n>>\n>> After several engineers all confirmed that every postgres and system\n>> setting matched, we eventually migrated back onto the original hardware\n>> using exactly the same methods and settings that had been used while the\n>> data was on the new hardware. As soon as we brought the DB live on the\n>> older (supposedly slower) hardware, everything started running smoothly\n>> again.\n>>\n>> As far as we were able to gather in the frantic moments of downtime,\n>> hundreds of queries were hanging up while trying to COMMIT. This in turn\n>> caused new queries backup as they waited for locks and so on.\n>>\n>> Prior to failing back to the original hardware, we found interesting\n>> posts about people having problems similar to ours due to NUMA and several\n>> suggested that they had solved their problem by setting\n>> vm.zone_reclaim_mode = 0\n>>\n>> Unfortunately we experienced the exact same problems even after turning\n>> off the zone_reclaim_mode. 
We did extensive testing of the i/o on the new\n>> hardware (both data and log arrays) before it was put into service and\n>> have done even more comprehensive testing since it came out of service.\n>> The short version is that the disks on the new hardware are faster than\n>> disks on the old server. In one test run we even set the server to write\n>> WALs to shared memory instead of to the log LV just to help rule out i/o\n>> problems and only saw a marginal improvement in overall TPS numbers.\n>>\n>> At this point we are extremely confident that if we have a configuration\n>> problem, it is not with any of the usual postgresql.conf/sysctl.conf\n>> suspects. We are pretty sure that the problem is being caused by the\n>> hardware in some way but that it is not the result of a hardware failure\n>> (e.g. degraded array, raid card self tests or what have you).\n>>\n>> Given that we're dealing with new hardware and the fact that this still\n>> acts a lot like a NUMA issue, are there other settings we should be\n>> adjusting to deal with possible performance problems associated with NUMA?\n>>\n>> Does this sound like something else entirely?\n>>\n>> Any thoughts appreciated.\n>>\n>\n> One piece of information that you didn't supply ... sorry if this is\n> obvious, but did you run the usual range of performance tests using\n> pgbench, bonnie++ and so forth to confirm that the new server was working\n> well before you put it into production? Did it compare well on those same\n> tests to your old hardware?\n>\n> Craig\n>\n>\n>> thanks,\n>> Steve\n>>\n>\n>\n\nWe saw the same performance problems when this new hardware was running cent 6.3 with a 2.6.32-279.19.1.el6.x86_64 kernel and when it was matched to the OS/kernel of the old hardware which was cent 5.8 with a 2.6.18-308.11.1.el5 kernel.\nYes the new hardware was thoroughly tested with bonnie before being put into services and has been tested since. We are unable to find any interesting differences in our bonnie tests comparisons between the old and new hardware. pgbench was not used prior to our discovery of the problem but has been used extensively since. FWIW This server ran a zabbix database (much lower load requirements) for a month without any problems prior to taking over as our primary production DB server. \nAfter quite a bit of trial and error we were able to find a pgbench test (2x 300 concurrent client sessions doing selects along with 1x 50 concurrent user session doing the standard pgbench query rotation) that showed the new hardware under performing when compared to the old hardware to the tune of about a 1000 TPS difference (2300 to 1300) for the 50 concurrent user pgbench run and about a 1000 less TPS for each of the select only runs (~24000 to ~23000). Less demanding tests would be handled equally well by both old and new servers. 
More demanding tests would tip both old and new over with very similar efficacy.\nHopefully that fleshes things out a bit more.Please let me know if I can provide additional information.thankssteve\nOn Fri, Mar 1, 2013 at 8:41 AM, Craig James <[email protected]> wrote:\nOn Fri, Mar 1, 2013 at 1:52 AM, Steven Crandell <[email protected]> wrote:\n\nRecently I moved my ~600G / ~15K TPS database from a 48 [email protected] server with 512GB RAM on 15K RPM diskto a newer server with 64 [email protected] server with 1T of RAM on 15K RPM disks\nThe move was from v9.1.4 to v9.1.8 (eventually also tested with v9.1.4 on the new hardware) and was done via base backup followed by slave promotion.All postgres configurations were matched exactly as were system and kernel parameters.\nOn the first day that this server saw production load levels it absolutely fell on its face. We ran an exhaustive battery of tests including failing over to the new (hardware matched) slave only to find the problem happening there also. \nAfter several engineers all confirmed that every postgres and system setting matched, we eventually migrated back onto the original hardware using exactly the same methods and settings that had been used while the data was on the new hardware. As soon as we brought the DB live on the older (supposedly slower) hardware, everything started running smoothly again. \nAs far as we were able to gather in the frantic moments of downtime, hundreds of queries were hanging up while trying to COMMIT. This in turn caused new queries backup as they waited for locks and so on. \nPrior to failing back to the original hardware, we found interesting posts about people having problems similar to ours due to NUMA and several suggested that they had solved their problem by setting vm.zone_reclaim_mode = 0 \nUnfortunately we experienced the exact same problems even after turning off the zone_reclaim_mode. We did extensive testing of the i/o on the new hardware (both data and log arrays) before it was put into service and have done even more comprehensive testing since it came out of service. The short version is that the disks on the new hardware are faster than disks on the old server. In one test run we even set the server to write WALs to shared memory instead of to the log LV just to help rule out i/o problems and only saw a marginal improvement in overall TPS numbers.\nAt this point we are extremely confident that if we have a configuration problem, it is not with any of the usual postgresql.conf/sysctl.conf suspects. We are pretty sure that the problem is being caused by the hardware in some way but that it is not the result of a hardware failure (e.g. degraded array, raid card self tests or what have you).\nGiven that we're dealing with new hardware and the fact that this still acts a lot like a NUMA issue, are there other settings we should be adjusting to deal with possible performance problems associated with NUMA?\nDoes this sound like something else entirely?Any thoughts appreciated.One piece of information that you didn't supply ... sorry if this is obvious, but did you run the usual range of performance tests using pgbench, bonnie++ and so forth to confirm that the new server was working well before you put it into production? Did it compare well on those same tests to your old hardware?\nCraigthanks,Steve",
"msg_date": "Fri, 1 Mar 2013 09:49:54 -0700",
"msg_from": "Steven Crandell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware upgrade, performance degrade?"
},
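For readers trying to reproduce a mixed pgbench load like the one described above, a rough sketch follows. Only the client counts (two 300-client select-only streams plus one 50-client standard mix) come from the post; the database name, scale factor, thread counts and run length are assumptions.

createdb bench                            # throwaway test database (name is a guess)
pgbench -i -s 1000 bench                  # initialize at an assumed scale factor
pgbench -S -c 300 -j 16 -T 600 bench &    # select-only stream #1, 300 clients
pgbench -S -c 300 -j 16 -T 600 bench &    # select-only stream #2, 300 clients
pgbench -c 50 -j 8 -T 600 bench           # standard pgbench mix, 50 clients
wait                                      # let the background select-only runs finish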
{
"msg_contents": "On Fri, Mar 1, 2013 at 9:49 AM, Steven Crandell\n<[email protected]> wrote:\n> We saw the same performance problems when this new hardware was running cent\n> 6.3 with a 2.6.32-279.19.1.el6.x86_64 kernel and when it was matched to the\n> OS/kernel of the old hardware which was cent 5.8 with a 2.6.18-308.11.1.el5\n> kernel.\n>\n> Yes the new hardware was thoroughly tested with bonnie before being put into\n> services and has been tested since. We are unable to find any interesting\n> differences in our bonnie tests comparisons between the old and new\n> hardware. pgbench was not used prior to our discovery of the problem but\n> has been used extensively since. FWIW This server ran a zabbix database\n> (much lower load requirements) for a month without any problems prior to\n> taking over as our primary production DB server.\n>\n> After quite a bit of trial and error we were able to find a pgbench test (2x\n> 300 concurrent client sessions doing selects along with 1x 50 concurrent\n> user session doing the standard pgbench query rotation) that showed the new\n> hardware under performing when compared to the old hardware to the tune of\n> about a 1000 TPS difference (2300 to 1300) for the 50 concurrent user\n> pgbench run and about a 1000 less TPS for each of the select only runs\n> (~24000 to ~23000). Less demanding tests would be handled equally well by\n> both old and new servers. More demanding tests would tip both old and new\n> over with very similar efficacy.\n>\n> Hopefully that fleshes things out a bit more.\n> Please let me know if I can provide additional information.\n\nOK I'd recommend testing with various numbers of clients and seeing\nwhat kind of shape you get from the curve when you plot it. I.e. does\nit fall off really hard at some number etc? If the old server\ndegrades more gracefully under very heavy load it may be that you're\njust admitting too many connections for the new one etc, not hitting\nits sweet spot.\n\nFWIW, the newest intel 10 core xeons and their cousins just barely\nkeep up with or beat the 8 or 12 core AMD Opterons from 3 years ago in\nmost of my testing. They look great on paper, but under heavy load\nthey are luck to keep up most the time.\n\nThere's also the possibility that even though you've turned off zone\nreclaim that your new hardware is still running in a numa mode that\nmakes internode communication much more expensive and that's costing\nyou money. This may especially be true with 1TB of memory that it's\nboth running at a lower speed AND internode connection costs are much\nhigher. use the numactl command (I think that's it) to see what the\ninternode costs are, and compare it to the old hardware. IF the\ninternode comm costs are really high, see if you can turn off numa in\nthe BIOS and if it gets somewhat better.\n\nOf course check the usual, that your battery backed cache is really\nworking in write back not write through etc.\n\nGood luck. Acceptance testing can really suck when newer, supposedly\nfaster hardware is in fact slower.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 1 Mar 2013 23:51:21 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware upgrade, performance degrade?"
},
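A sketch of the two checks suggested above, varying the client count to find where throughput stops scaling and comparing NUMA internode distances; client counts, duration and database name are placeholders.

# Sweep client counts and watch where TPS falls off instead of scaling.
for c in 8 16 32 64 128 256 512; do
  echo "clients=$c"
  pgbench -S -c "$c" -j 8 -T 120 bench | grep tps
done

# Compare NUMA layout and internode distances on the old and new servers.
numactl --hardware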
{
"msg_contents": "\nOn 01/03/2013, at 10.52, Steven Crandell <[email protected]> wrote:\n\n> Recently I moved my ~600G / ~15K TPS database from a \n> 48 [email protected] server with 512GB RAM on 15K RPM disk\n> to a newer server with \n> 64 [email protected] server with 1T of RAM on 15K RPM disks\n> \n> The move was from v9.1.4 to v9.1.8 (eventually also tested with v9.1.4 on the new hardware) and was done via base backup followed by slave promotion.\n> All postgres configurations were matched exactly as were system and kernel parameters.\n> \n\nmy guess is that you have gone down in clockfrequency on memory when you doubled the amount of memory \n\nin a mainly memory cached database the performance is extremely sensitive to memory speed\n\nJesper\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 2 Mar 2013 08:30:22 +0100",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware upgrade, performance degrade?"
},
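One hedged way to check this theory is to compare the speed the DIMMs are rated for against the speed the BIOS actually configured, for example with dmidecode (field names vary a little between versions and platforms):

# Rated speed vs. the clock the memory is actually running at.
sudo dmidecode --type memory | grep -Ei 'speed'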
{
"msg_contents": "Steven,\n\n> We saw the same performance problems when this new hardware was running\n> cent 6.3 with a 2.6.32-279.19.1.el6.x86_64 kernel and when it was matched\n> to the OS/kernel of the old hardware which was cent 5.8 with\n> a 2.6.18-308.11.1.el5 kernel.\n\nOh, now that's interesting. We've been seeing the same issue (IO stalls\non COMMIT) ond had attributed it to some bugs in the 3.2 and 3.4\nkernels, partly because we don't have a credible \"old server\" to test\nagainst. Now you have me wondering if there's not a hardware or driver\nissue with a major HW manufacturer which just happens to be hitting\naround now.\n\nCan you detail your hardware stack so that we can compare notes?\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 03 Mar 2013 12:16:07 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware upgrade, performance degrade?"
},
{
"msg_contents": "On 03/03/2013 03:16 PM, Josh Berkus wrote:\n> Steven,\n> \n>> We saw the same performance problems when this new hardware was running\n>> cent 6.3 with a 2.6.32-279.19.1.el6.x86_64 kernel and when it was matched\n>> to the OS/kernel of the old hardware which was cent 5.8 with\n>> a 2.6.18-308.11.1.el5 kernel.\n> \n> Oh, now that's interesting. We've been seeing the same issue (IO stalls\n> on COMMIT) ond had attributed it to some bugs in the 3.2 and 3.4\n> kernels, partly because we don't have a credible \"old server\" to test\n> against. Now you have me wondering if there's not a hardware or driver\n> issue with a major HW manufacturer which just happens to be hitting\n> around now.\n> \n> Can you detail your hardware stack so that we can compare notes?\n> \n> \nThe current Red Hat Enterprise Linux 6.4 kernel is\nkernel-2.6.32-358.0.1.el6.x86_64\n\nin case that matters.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 03 Mar 2013 15:34:53 -0500",
"msg_from": "Jean-David Beyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware upgrade, performance degrade?"
},
{
"msg_contents": "Here's our hardware break down.\n\nThe logvg on the new hardware is 30MB/s slower (170 MB/s vs 200 MB/s )\nthan the logvg on the older hardware which was an immediately interesting\ndifference but we have yet to be able to create a test scenario that\nsuccessfully implicates this slower log speed in our problems. That is\nsomething we are actively working on.\n\n\nOld server hardware:\n Manufacturer: Dell Inc.\n Product Name: PowerEdge R810\n 4x Intel(R) Xeon(R) CPU E7540 @ 2.00GHz\n 32x16384 MB 1066 MHz DDR3\n Controller 0: PERC H700 - 2 disk RAID-1 278.88 GB rootvg\n Controller 1: PERC H800 - 18 disk RAID-6 2,178.00 GB datavg, 4\ndrive RAID-10 272.25 GB logvg, 2 hot spare\n 2x 278.88 GB 15K SAS on controller 0\n 24x 136.13 GB 15K SAS on controller 1\n\nNew server hardware:\n Manufacturer: Dell Inc.\n Product Name: PowerEdge R820\n 4x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz\n 32x32 GB 1333 MHz DDR3\n Controller 0: PERC H710P - 4 disk RAID-6 557.75 GB rootvg\n Controller 1: PERC H810 - 20 disk RAID-60 4,462.00 GB datavg, 2\ndisk RAID-1 278.88 GB logvg, 2 hot spare\n 28x278.88 GB 15K SAS drives total.\n\n\nOn Sun, Mar 3, 2013 at 1:34 PM, Jean-David Beyer <[email protected]>wrote:\n\n> On 03/03/2013 03:16 PM, Josh Berkus wrote:\n> > Steven,\n> >\n> >> We saw the same performance problems when this new hardware was running\n> >> cent 6.3 with a 2.6.32-279.19.1.el6.x86_64 kernel and when it was\n> matched\n> >> to the OS/kernel of the old hardware which was cent 5.8 with\n> >> a 2.6.18-308.11.1.el5 kernel.\n> >\n> > Oh, now that's interesting. We've been seeing the same issue (IO stalls\n> > on COMMIT) ond had attributed it to some bugs in the 3.2 and 3.4\n> > kernels, partly because we don't have a credible \"old server\" to test\n> > against. Now you have me wondering if there's not a hardware or driver\n> > issue with a major HW manufacturer which just happens to be hitting\n> > around now.\n> >\n> > Can you detail your hardware stack so that we can compare notes?\n> >\n> >\n> The current Red Hat Enterprise Linux 6.4 kernel is\n> kernel-2.6.32-358.0.1.el6.x86_64\n>\n> in case that matters.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHere's our hardware break down.The logvg on the new hardware is 30MB/s slower (170 MB/s vs 200 MB/s ) than the logvg on the older hardware which was an immediately interesting difference but we have yet to be able to create a test scenario that successfully implicates this slower log speed in our problems. That is something we are actively working on.\nOld server hardware: Manufacturer: Dell Inc. Product Name: PowerEdge R810\n 4x Intel(R) Xeon(R) CPU E7540 @ 2.00GHz\n 32x16384 MB 1066 MHz DDR3\n Controller 0: PERC H700 - 2 disk RAID-1 278.88 GB rootvg\n Controller 1: PERC H800 - 18 disk RAID-6 2,178.00 GB datavg, 4 \ndrive RAID-10 272.25 GB logvg, 2 hot spare\n 2x 278.88 GB 15K SAS on controller 0\n 24x 136.13 GB 15K SAS on controller 1 New server hardware: Manufacturer: Dell Inc.\n Product Name: PowerEdge R820\n 4x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz\n 32x32 GB 1333 MHz DDR3\n Controller 0: PERC H710P - 4 disk RAID-6 557.75 GB rootvg\n Controller 1: PERC H810 - 20 disk RAID-60 4,462.00 GB \ndatavg, 2 disk RAID-1 278.88 GB logvg, 2 hot spare\n 28x278.88 GB 15K SAS drives total. 
On Sun, Mar 3, 2013 at 1:34 PM, Jean-David Beyer <[email protected]> wrote:\nOn 03/03/2013 03:16 PM, Josh Berkus wrote:\n> Steven,\n>\n>> We saw the same performance problems when this new hardware was running\n>> cent 6.3 with a 2.6.32-279.19.1.el6.x86_64 kernel and when it was matched\n>> to the OS/kernel of the old hardware which was cent 5.8 with\n>> a 2.6.18-308.11.1.el5 kernel.\n>\n> Oh, now that's interesting. We've been seeing the same issue (IO stalls\n> on COMMIT) ond had attributed it to some bugs in the 3.2 and 3.4\n> kernels, partly because we don't have a credible \"old server\" to test\n> against. Now you have me wondering if there's not a hardware or driver\n> issue with a major HW manufacturer which just happens to be hitting\n> around now.\n>\n> Can you detail your hardware stack so that we can compare notes?\n>\n>\nThe current Red Hat Enterprise Linux 6.4 kernel is\nkernel-2.6.32-358.0.1.el6.x86_64\n\nin case that matters.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 4 Mar 2013 15:54:40 -0700",
"msg_from": "Steven Crandell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware upgrade, performance degrade?"
},
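The 170 vs 200 MB/s figures read like sequential-write numbers; a minimal dd check of that kind, assuming the log volume is mounted at /pg_log, might look like the following.

# ~8 GB sequential write, flushed at the end so the write cache doesn't flatter the result.
dd if=/dev/zero of=/pg_log/ddtest bs=1M count=8192 conv=fdatasync
rm -f /pg_log/ddtest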
{
"msg_contents": "On 05/03/13 11:54, Steven Crandell wrote:\n> Here's our hardware break down.\n>\n> The logvg on the new hardware is 30MB/s slower (170 MB/s vs 200 MB/s )\n> than the logvg on the older hardware which was an immediately interesting\n> difference but we have yet to be able to create a test scenario that\n> successfully implicates this slower log speed in our problems. That is\n> something we are actively working on.\n>\n>\n> Old server hardware:\n> Manufacturer: Dell Inc.\n> Product Name: PowerEdge R810\n> 4x Intel(R) Xeon(R) CPU E7540 @ 2.00GHz\n> 32x16384 MB 1066 MHz DDR3\n> Controller 0: PERC H700 - 2 disk RAID-1 278.88 GB rootvg\n> Controller 1: PERC H800 - 18 disk RAID-6 2,178.00 GB datavg, 4\n> drive RAID-10 272.25 GB logvg, 2 hot spare\n> 2x 278.88 GB 15K SAS on controller 0\n> 24x 136.13 GB 15K SAS on controller 1\n>\n> New server hardware:\n> Manufacturer: Dell Inc.\n> Product Name: PowerEdge R820\n> 4x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz\n> 32x32 GB 1333 MHz DDR3\n> Controller 0: PERC H710P - 4 disk RAID-6 557.75 GB rootvg\n> Controller 1: PERC H810 - 20 disk RAID-60 4,462.00 GB datavg, 2\n> disk RAID-1 278.88 GB logvg, 2 hot spare\n> 28x278.88 GB 15K SAS drives total.\n>\n>\n> On Sun, Mar 3, 2013 at 1:34 PM, Jean-David Beyer <[email protected]>wrote:\n>\n\nRight - It is probably worth running 'pg_test_fsync' on the two logvg's \nand comparing the results. This will tell you if the commit latency is \nsimilar or not on the two disk systems.\n\nOne other difference that springs immediately to mind is that datavg is \nan 18 disk RAID 6 on the old system and a 20 disk RAID 60 on the new \none...so you have about 1/2 the io performance right there.\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 05 Mar 2013 12:09:41 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware upgrade, performance degrade?"
},
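pg_test_fsync is shipped as a contrib program in 9.1; a sketch of running it against each volume, assuming mount points /pg_log and /pg_data, is below. The interesting comparison is the per-method fsync rates, which track commit latency.

# Commit-latency probe on the WAL volume, then on the data volume.
pg_test_fsync -f /pg_log/fsync_probe
pg_test_fsync -f /pg_data/fsync_probe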
{
"msg_contents": "On Fri, Mar 1, 2013 at 1:52 AM, Steven Crandell\n<[email protected]> wrote:\n> As far as we were able to gather in the frantic moments of downtime,\n> hundreds of queries were hanging up while trying to COMMIT. This in turn\n> caused new queries backup as they waited for locks and so on.\n>\n> Given that we're dealing with new hardware and the fact that this still acts\n> a lot like a NUMA issue, are there other settings we should be adjusting to\n> deal with possible performance problems associated with NUMA?\n>\n> Does this sound like something else entirely?\n\nIt does. I collected a number of kernel (and not only) tuning issues\nwith short explanations to prevent it from affecting database behavior\nbadly. Try to follow them:\n\nhttps://code.google.com/p/pgcookbook/wiki/Database_Server_Configuration\n\n--\nSergey Konoplev\nDatabase and Software Architect\nhttp://www.linkedin.com/in/grayhemp\n\nPhones:\nUSA +1 415 867 9984\nRussia, Moscow +7 901 903 0499\nRussia, Krasnodar +7 988 888 1979\n\nSkype: gray-hemp\nJabber: [email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 4 Mar 2013 15:11:29 -0800",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware upgrade, performance degrade?"
},
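The linked checklist is not reproduced here, but kernel knobs of the kind such lists cover typically include the following; the values are illustrative assumptions, not necessarily what that page recommends, and should be tested rather than copied blindly.

# Keep the allocator from preferring to reclaim local NUMA pages.
sysctl -w vm.zone_reclaim_mode=0
# Avoid swapping out the working set on a dedicated database box.
sysctl -w vm.swappiness=0
# Flush dirty pages earlier so commits don't stall behind a huge writeback burst.
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10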
{
"msg_contents": "On Mon, Mar 04, 2013 at 03:54:40PM -0700, Steven Crandell wrote:\n> Here's our hardware break down.\n> \n> The logvg on the new hardware is 30MB/s slower (170 MB/s vs 200 MB/s )\n> than the logvg on the older hardware which was an immediately interesting\n> difference but we have yet to be able to create a test scenario that\n> successfully implicates this slower log speed in our problems. That is\n> something we are actively working on.\n> \n> \n> Old server hardware:\n> Manufacturer: Dell Inc.\n> Product Name: PowerEdge R810\n> 4x Intel(R) Xeon(R) CPU E7540 @ 2.00GHz\n> 32x16384 MB 1066 MHz DDR3\n> Controller 0: PERC H700 - 2 disk RAID-1 278.88 GB rootvg\n> Controller 1: PERC H800 - 18 disk RAID-6 2,178.00 GB datavg, 4\n> drive RAID-10 272.25 GB logvg, 2 hot spare\n> 2x 278.88 GB 15K SAS on controller 0\n> 24x 136.13 GB 15K SAS on controller 1\n> \n> New server hardware:\n> Manufacturer: Dell Inc.\n> Product Name: PowerEdge R820\n> 4x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz\n> 32x32 GB 1333 MHz DDR3\n> Controller 0: PERC H710P - 4 disk RAID-6 557.75 GB rootvg\n> Controller 1: PERC H810 - 20 disk RAID-60 4,462.00 GB datavg, 2\n> disk RAID-1 278.88 GB logvg, 2 hot spare\n> 28x278.88 GB 15K SAS drives total.\n\nHmm, you went from a striped (raid 1/0) log volume on the old hardware\nto a non-striped (raid 1) volume on the new hardware. That could\nexplain the speed drop. Are the disks the same speed for the two\nsystems?\n\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 4 Mar 2013 23:17:17 +0000",
"msg_from": "John Rouillard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware upgrade, performance degrade?"
},
{
"msg_contents": "On Mon, Mar 4, 2013 at 4:17 PM, John Rouillard <[email protected]> wrote:\n> On Mon, Mar 04, 2013 at 03:54:40PM -0700, Steven Crandell wrote:\n>> Here's our hardware break down.\n>>\n>> The logvg on the new hardware is 30MB/s slower (170 MB/s vs 200 MB/s )\n>> than the logvg on the older hardware which was an immediately interesting\n>> difference but we have yet to be able to create a test scenario that\n>> successfully implicates this slower log speed in our problems. That is\n>> something we are actively working on.\n>>\n>>\n>> Old server hardware:\n>> Manufacturer: Dell Inc.\n>> Product Name: PowerEdge R810\n>> 4x Intel(R) Xeon(R) CPU E7540 @ 2.00GHz\n>> 32x16384 MB 1066 MHz DDR3\n>> Controller 0: PERC H700 - 2 disk RAID-1 278.88 GB rootvg\n>> Controller 1: PERC H800 - 18 disk RAID-6 2,178.00 GB datavg, 4\n>> drive RAID-10 272.25 GB logvg, 2 hot spare\n>> 2x 278.88 GB 15K SAS on controller 0\n>> 24x 136.13 GB 15K SAS on controller 1\n>>\n>> New server hardware:\n>> Manufacturer: Dell Inc.\n>> Product Name: PowerEdge R820\n>> 4x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz\n>> 32x32 GB 1333 MHz DDR3\n>> Controller 0: PERC H710P - 4 disk RAID-6 557.75 GB rootvg\n>> Controller 1: PERC H810 - 20 disk RAID-60 4,462.00 GB datavg, 2\n>> disk RAID-1 278.88 GB logvg, 2 hot spare\n>> 28x278.88 GB 15K SAS drives total.\n>\n> Hmm, you went from a striped (raid 1/0) log volume on the old hardware\n> to a non-striped (raid 1) volume on the new hardware. That could\n> explain the speed drop. Are the disks the same speed for the two\n> systems?\n\nYeah that's a terrible tradeoff there. Just throw 4 disks in a\nRAID-10 instead of RAID-60. With 4 disks you'll get the same storage\nand much better performance from RAID-10.\n\nAlso consider using larger drives and a RAID-10 for your big drive\narray. RAID-6 or RAID-60 is notoriously slow for databases,\nespecially for random access.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 4 Mar 2013 16:46:21 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware upgrade, performance degrade?"
},
{
"msg_contents": "Mark,\nI ran pg_fsync_test on log and data LV's on both old and new hardware.\n\nNew hardware out performed old on every measurable on the log LV\n\nSame for the data LV's except for the 16kB open_sync write where the old\nhardware edged out the new by a hair (18649 vs 17999 ops/sec)\nand write, fsync, close where they were effectively tied.\n\n\n\n\nOn Mon, Mar 4, 2013 at 4:17 PM, John Rouillard <[email protected]> wrote:\n\n> On Mon, Mar 04, 2013 at 03:54:40PM -0700, Steven Crandell wrote:\n> > Here's our hardware break down.\n> >\n> > The logvg on the new hardware is 30MB/s slower (170 MB/s vs 200 MB/s )\n> > than the logvg on the older hardware which was an immediately interesting\n> > difference but we have yet to be able to create a test scenario that\n> > successfully implicates this slower log speed in our problems. That is\n> > something we are actively working on.\n> >\n> >\n> > Old server hardware:\n> > Manufacturer: Dell Inc.\n> > Product Name: PowerEdge R810\n> > 4x Intel(R) Xeon(R) CPU E7540 @ 2.00GHz\n> > 32x16384 MB 1066 MHz DDR3\n> > Controller 0: PERC H700 - 2 disk RAID-1 278.88 GB rootvg\n> > Controller 1: PERC H800 - 18 disk RAID-6 2,178.00 GB datavg, 4\n> > drive RAID-10 272.25 GB logvg, 2 hot spare\n> > 2x 278.88 GB 15K SAS on controller 0\n> > 24x 136.13 GB 15K SAS on controller 1\n> >\n> > New server hardware:\n> > Manufacturer: Dell Inc.\n> > Product Name: PowerEdge R820\n> > 4x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz\n> > 32x32 GB 1333 MHz DDR3\n> > Controller 0: PERC H710P - 4 disk RAID-6 557.75 GB rootvg\n> > Controller 1: PERC H810 - 20 disk RAID-60 4,462.00 GB datavg,\n> 2\n> > disk RAID-1 278.88 GB logvg, 2 hot spare\n> > 28x278.88 GB 15K SAS drives total.\n>\n> Hmm, you went from a striped (raid 1/0) log volume on the old hardware\n> to a non-striped (raid 1) volume on the new hardware. That could\n> explain the speed drop. Are the disks the same speed for the two\n> systems?\n>\n> --\n> -- rouilj\n>\n> John Rouillard System Administrator\n> Renesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nMark,I ran pg_fsync_test on log and data LV's on both old and new hardware. New hardware out performed old on every measurable on the log LV\nSame for the data LV's except for the 16kB open_sync write where the old hardware edged out the new by a hair (18649 vs 17999 ops/sec) and write, fsync, close where they were effectively tied.\nOn Mon, Mar 4, 2013 at 4:17 PM, John Rouillard <[email protected]> wrote:\nOn Mon, Mar 04, 2013 at 03:54:40PM -0700, Steven Crandell wrote:\n> Here's our hardware break down.\n>\n> The logvg on the new hardware is 30MB/s slower (170 MB/s vs 200 MB/s )\n> than the logvg on the older hardware which was an immediately interesting\n> difference but we have yet to be able to create a test scenario that\n> successfully implicates this slower log speed in our problems. 
That is\n> something we are actively working on.\n>\n>\n> Old server hardware:\n> Manufacturer: Dell Inc.\n> Product Name: PowerEdge R810\n> 4x Intel(R) Xeon(R) CPU E7540 @ 2.00GHz\n> 32x16384 MB 1066 MHz DDR3\n> Controller 0: PERC H700 - 2 disk RAID-1 278.88 GB rootvg\n> Controller 1: PERC H800 - 18 disk RAID-6 2,178.00 GB datavg, 4\n> drive RAID-10 272.25 GB logvg, 2 hot spare\n> 2x 278.88 GB 15K SAS on controller 0\n> 24x 136.13 GB 15K SAS on controller 1\n>\n> New server hardware:\n> Manufacturer: Dell Inc.\n> Product Name: PowerEdge R820\n> 4x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz\n> 32x32 GB 1333 MHz DDR3\n> Controller 0: PERC H710P - 4 disk RAID-6 557.75 GB rootvg\n> Controller 1: PERC H810 - 20 disk RAID-60 4,462.00 GB datavg, 2\n> disk RAID-1 278.88 GB logvg, 2 hot spare\n> 28x278.88 GB 15K SAS drives total.\n\nHmm, you went from a striped (raid 1/0) log volume on the old hardware\nto a non-striped (raid 1) volume on the new hardware. That could\nexplain the speed drop. Are the disks the same speed for the two\nsystems?\n\n--\n -- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 4 Mar 2013 16:47:55 -0700",
"msg_from": "Steven Crandell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware upgrade, performance degrade?"
},
{
"msg_contents": "I'd be more interested in the random results from bonnie++ but my real\nworld experience tells me that for heavily parallel writes etc a\nRAID-10 will stomp a RAID-6 or RAID-60 on the same number of drives.\n\nOn Mon, Mar 4, 2013 at 4:47 PM, Steven Crandell\n<[email protected]> wrote:\n> Mark,\n> I ran pg_fsync_test on log and data LV's on both old and new hardware.\n>\n> New hardware out performed old on every measurable on the log LV\n>\n> Same for the data LV's except for the 16kB open_sync write where the old\n> hardware edged out the new by a hair (18649 vs 17999 ops/sec)\n> and write, fsync, close where they were effectively tied.\n>\n>\n>\n>\n> On Mon, Mar 4, 2013 at 4:17 PM, John Rouillard <[email protected]> wrote:\n>>\n>> On Mon, Mar 04, 2013 at 03:54:40PM -0700, Steven Crandell wrote:\n>> > Here's our hardware break down.\n>> >\n>> > The logvg on the new hardware is 30MB/s slower (170 MB/s vs 200 MB/s )\n>> > than the logvg on the older hardware which was an immediately\n>> > interesting\n>> > difference but we have yet to be able to create a test scenario that\n>> > successfully implicates this slower log speed in our problems. That is\n>> > something we are actively working on.\n>> >\n>> >\n>> > Old server hardware:\n>> > Manufacturer: Dell Inc.\n>> > Product Name: PowerEdge R810\n>> > 4x Intel(R) Xeon(R) CPU E7540 @ 2.00GHz\n>> > 32x16384 MB 1066 MHz DDR3\n>> > Controller 0: PERC H700 - 2 disk RAID-1 278.88 GB rootvg\n>> > Controller 1: PERC H800 - 18 disk RAID-6 2,178.00 GB datavg, 4\n>> > drive RAID-10 272.25 GB logvg, 2 hot spare\n>> > 2x 278.88 GB 15K SAS on controller 0\n>> > 24x 136.13 GB 15K SAS on controller 1\n>> >\n>> > New server hardware:\n>> > Manufacturer: Dell Inc.\n>> > Product Name: PowerEdge R820\n>> > 4x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz\n>> > 32x32 GB 1333 MHz DDR3\n>> > Controller 0: PERC H710P - 4 disk RAID-6 557.75 GB rootvg\n>> > Controller 1: PERC H810 - 20 disk RAID-60 4,462.00 GB datavg,\n>> > 2\n>> > disk RAID-1 278.88 GB logvg, 2 hot spare\n>> > 28x278.88 GB 15K SAS drives total.\n>>\n>> Hmm, you went from a striped (raid 1/0) log volume on the old hardware\n>> to a non-striped (raid 1) volume on the new hardware. That could\n>> explain the speed drop. Are the disks the same speed for the two\n>> systems?\n>>\n>> --\n>> -- rouilj\n>>\n>> John Rouillard System Administrator\n>> Renesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 4 Mar 2013 18:48:19 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware upgrade, performance degrade?"
},
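For reference, a bonnie++ invocation that reports the random-seek figure mentioned above might look like this; the directory, user and size are assumptions, and on machines with this much RAM the file size would need to be much larger (at least twice RAM) for the result to mean anything.

# -n 0 skips the small-file tests; the "Random Seeks" column is the one of interest.
bonnie++ -d /pg_data/bonnie -s 64g -n 0 -u postgres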
{
"msg_contents": "Scott,\n\nLong story short, yes I agree, the raids are all kinds of wrong and if I\nhad been involved in the build processes they would look very different\nright now.\n\nI have been playing around with different bs= and count= settings for some\nsimple dd tests tonight and found some striking differences that finally\nshow the new hardware under performing (significantly!) when compared to\nthe old hardware.\n\nThat said, we are still struggling to find a postgres-specific test that\nyields sufficiently different results on old and new hardware that it could\nserve as an indicator that we had solved the problem prior to shoving this\nbox back into production service and crossing our fingers. The moment we\nfix the raids, we give away a prime testing scenario. Tomorrow I plan to\nsplit the difference and fix the raids on one of the new boxes and not the\nother.\n\nWe are also working on capturing prod logs for playback but that is proving\nnon-trivial due to our existing performance bottleneck and\nsome eccentricities associated with our application.\n\nmore to come on this hopefully\n\n\nmany thanks for all of the insights thus far.\n\n\nOn Mon, Mar 4, 2013 at 6:48 PM, Scott Marlowe <[email protected]>wrote:\n\n> I'd be more interested in the random results from bonnie++ but my real\n> world experience tells me that for heavily parallel writes etc a\n> RAID-10 will stomp a RAID-6 or RAID-60 on the same number of drives.\n>\n> On Mon, Mar 4, 2013 at 4:47 PM, Steven Crandell\n> <[email protected]> wrote:\n> > Mark,\n> > I ran pg_fsync_test on log and data LV's on both old and new hardware.\n> >\n> > New hardware out performed old on every measurable on the log LV\n> >\n> > Same for the data LV's except for the 16kB open_sync write where the old\n> > hardware edged out the new by a hair (18649 vs 17999 ops/sec)\n> > and write, fsync, close where they were effectively tied.\n> >\n> >\n> >\n> >\n> > On Mon, Mar 4, 2013 at 4:17 PM, John Rouillard <[email protected]>\n> wrote:\n> >>\n> >> On Mon, Mar 04, 2013 at 03:54:40PM -0700, Steven Crandell wrote:\n> >> > Here's our hardware break down.\n> >> >\n> >> > The logvg on the new hardware is 30MB/s slower (170 MB/s vs 200 MB/s\n> )\n> >> > than the logvg on the older hardware which was an immediately\n> >> > interesting\n> >> > difference but we have yet to be able to create a test scenario that\n> >> > successfully implicates this slower log speed in our problems. 
That is\n> >> > something we are actively working on.\n> >> >\n> >> >\n> >> > Old server hardware:\n> >> > Manufacturer: Dell Inc.\n> >> > Product Name: PowerEdge R810\n> >> > 4x Intel(R) Xeon(R) CPU E7540 @ 2.00GHz\n> >> > 32x16384 MB 1066 MHz DDR3\n> >> > Controller 0: PERC H700 - 2 disk RAID-1 278.88 GB rootvg\n> >> > Controller 1: PERC H800 - 18 disk RAID-6 2,178.00 GB datavg, 4\n> >> > drive RAID-10 272.25 GB logvg, 2 hot spare\n> >> > 2x 278.88 GB 15K SAS on controller 0\n> >> > 24x 136.13 GB 15K SAS on controller 1\n> >> >\n> >> > New server hardware:\n> >> > Manufacturer: Dell Inc.\n> >> > Product Name: PowerEdge R820\n> >> > 4x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz\n> >> > 32x32 GB 1333 MHz DDR3\n> >> > Controller 0: PERC H710P - 4 disk RAID-6 557.75 GB rootvg\n> >> > Controller 1: PERC H810 - 20 disk RAID-60 4,462.00 GB\n> datavg,\n> >> > 2\n> >> > disk RAID-1 278.88 GB logvg, 2 hot spare\n> >> > 28x278.88 GB 15K SAS drives total.\n> >>\n> >> Hmm, you went from a striped (raid 1/0) log volume on the old hardware\n> >> to a non-striped (raid 1) volume on the new hardware. That could\n> >> explain the speed drop. Are the disks the same speed for the two\n> >> systems?\n> >>\n> >> --\n> >> -- rouilj\n> >>\n> >> John Rouillard System Administrator\n> >> Renesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n> >>\n> >>\n> >> --\n> >> Sent via pgsql-performance mailing list (\n> [email protected])\n> >> To make changes to your subscription:\n> >> http://www.postgresql.org/mailpref/pgsql-performance\n> >\n> >\n>\n>\n>\n> --\n> To understand recursion, one must first understand recursion.\n>\n\nScott,Long story short, yes I agree, the raids are all kinds of wrong and if I had been involved in the build processes they would look very different right now. \nI have been playing around with different bs= and count= settings for some simple dd tests tonight and found some striking differences that finally show the new hardware under performing (significantly!) when compared to the old hardware.\nThat said, we are still struggling to find a postgres-specific test that yields sufficiently different results on old and new hardware that it could serve as an indicator that we had solved the problem prior to shoving this box back into production service and crossing our fingers. The moment we fix the raids, we give away a prime testing scenario. 
Tomorrow I plan to split the difference and fix the raids on one of the new boxes and not the other.\nWe are also working on capturing prod logs for playback but that is proving non-trivial due to our existing performance bottleneck and some eccentricities associated with our application.\nmore to come on this hopefullymany thanks for all of the insights thus far.\nOn Mon, Mar 4, 2013 at 6:48 PM, Scott Marlowe <[email protected]> wrote:\nI'd be more interested in the random results from bonnie++ but my real\nworld experience tells me that for heavily parallel writes etc a\nRAID-10 will stomp a RAID-6 or RAID-60 on the same number of drives.\n\nOn Mon, Mar 4, 2013 at 4:47 PM, Steven Crandell\n<[email protected]> wrote:\n> Mark,\n> I ran pg_fsync_test on log and data LV's on both old and new hardware.\n>\n> New hardware out performed old on every measurable on the log LV\n>\n> Same for the data LV's except for the 16kB open_sync write where the old\n> hardware edged out the new by a hair (18649 vs 17999 ops/sec)\n> and write, fsync, close where they were effectively tied.\n>\n>\n>\n>\n> On Mon, Mar 4, 2013 at 4:17 PM, John Rouillard <[email protected]> wrote:\n>>\n>> On Mon, Mar 04, 2013 at 03:54:40PM -0700, Steven Crandell wrote:\n>> > Here's our hardware break down.\n>> >\n>> > The logvg on the new hardware is 30MB/s slower (170 MB/s vs 200 MB/s )\n>> > than the logvg on the older hardware which was an immediately\n>> > interesting\n>> > difference but we have yet to be able to create a test scenario that\n>> > successfully implicates this slower log speed in our problems. That is\n>> > something we are actively working on.\n>> >\n>> >\n>> > Old server hardware:\n>> > Manufacturer: Dell Inc.\n>> > Product Name: PowerEdge R810\n>> > 4x Intel(R) Xeon(R) CPU E7540 @ 2.00GHz\n>> > 32x16384 MB 1066 MHz DDR3\n>> > Controller 0: PERC H700 - 2 disk RAID-1 278.88 GB rootvg\n>> > Controller 1: PERC H800 - 18 disk RAID-6 2,178.00 GB datavg, 4\n>> > drive RAID-10 272.25 GB logvg, 2 hot spare\n>> > 2x 278.88 GB 15K SAS on controller 0\n>> > 24x 136.13 GB 15K SAS on controller 1\n>> >\n>> > New server hardware:\n>> > Manufacturer: Dell Inc.\n>> > Product Name: PowerEdge R820\n>> > 4x Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz\n>> > 32x32 GB 1333 MHz DDR3\n>> > Controller 0: PERC H710P - 4 disk RAID-6 557.75 GB rootvg\n>> > Controller 1: PERC H810 - 20 disk RAID-60 4,462.00 GB datavg,\n>> > 2\n>> > disk RAID-1 278.88 GB logvg, 2 hot spare\n>> > 28x278.88 GB 15K SAS drives total.\n>>\n>> Hmm, you went from a striped (raid 1/0) log volume on the old hardware\n>> to a non-striped (raid 1) volume on the new hardware. That could\n>> explain the speed drop. Are the disks the same speed for the two\n>> systems?\n>>\n>> --\n>> -- rouilj\n>>\n>> John Rouillard System Administrator\n>> Renesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n\n\n--\nTo understand recursion, one must first understand recursion.",
"msg_date": "Mon, 4 Mar 2013 20:05:27 -0700",
"msg_from": "Steven Crandell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hardware upgrade, performance degrade?"
}
] |
[
{
"msg_contents": "Hi, I'm going to setup a new server for my postgresql database, and I am considering one of these: http://www.hetzner.de/hosting/produkte_rootserver/poweredge-r720 with four SAS drives in a RAID 10 array. Has any of you any particular comments/pitfalls/etc. to mention on the setup? My application is very write heavy.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 1 Mar 2013 12:43:17 +0100",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "New server setup"
},
{
"msg_contents": "On Fri, Mar 1, 2013 at 3:43 AM, Niels Kristian Schjødt <\[email protected]> wrote:\n\n> Hi, I'm going to setup a new server for my postgresql database, and I am\n> considering one of these:\n> http://www.hetzner.de/hosting/produkte_rootserver/poweredge-r720 with\n> four SAS drives in a RAID 10 array. Has any of you any particular\n> comments/pitfalls/etc. to mention on the setup? My application is very\n> write heavy.\n>\n\nI can only tell you our experience with Dell from several years ago. We\nbought two Dell servers similar (somewhat larger) than the model you're\nlooking at. We'll never buy from them again.\n\nAdvantages: They work. They haven't failed.\n\nDisadvantages:\n\nPerformance sucks. Dell costs far more than \"white box\" servers we buy\nfrom a \"white box\" supplier (ASA Computers). ASA gives us roughly double\nthe performance for the same price. We can buy exactly what we want from\nASA.\n\nDell did a disk-drive \"lock in.\" The RAID controller won't spin up a\nnon-Dell disk. They wanted roughly four times the price for their disks\ncompared to buying the exact same disks on Amazon. If a disk went out\ntoday, it would probably cost even more because that model is obsolete\n(luckily, we bought a couple spares). I think they abandoned this policy\nbecause it caused so many complaints, but you should check before you buy.\nThis was an incredibly stupid RAID controller design.\n\nDell tech support doesn't know what they're talking about when it comes to\nRAID controllers and serious server support. You're better off with a\nwhite-box solution, where you can buy the exact parts recommended in this\ngroup and get technical advice from people who know what they're talking\nabout. Dell basically doesn't understand Postgres.\n\nThey boast excellent on-site service, but for the price of their computers\nand their service contract, you can buy two servers from a white-box\nvendor. Our white-box servers have been just as reliable as the Dell\nservers -- no failures.\n\nI'm sure someone in Europe can recommend a good vendor for you.\n\nCraig James\n\n\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Fri, Mar 1, 2013 at 3:43 AM, Niels Kristian Schjødt <[email protected]> wrote:\nHi, I'm going to setup a new server for my postgresql database, and I am considering one of these: http://www.hetzner.de/hosting/produkte_rootserver/poweredge-r720 with four SAS drives in a RAID 10 array. Has any of you any particular comments/pitfalls/etc. to mention on the setup? My application is very write heavy.\nI can only tell you our experience with Dell from several years ago. We\n bought two Dell servers similar (somewhat larger) than the model you're\n looking at. We'll never buy from them again.\nAdvantages: They work. They haven't failed.Disadvantages: Performance\n sucks. Dell costs far more than \"white box\" servers we buy from a \n\"white box\" supplier (ASA Computers). ASA gives us roughly double the \nperformance for the same price. We can buy exactly what we want from \nASA.\nDell did a disk-drive \"lock in.\" The RAID controller won't spin up a\n non-Dell disk. They wanted roughly four times the price for their \ndisks compared to buying the exact same disks on Amazon. If a disk went\n out today, it would probably cost even more because that model is \nobsolete (luckily, we bought a couple spares). 
I think they abandoned \nthis policy because it caused so many complaints, but you should check \nbefore you buy. This was an incredibly stupid RAID controller design.\nDell tech support doesn't know what they're talking about when it \ncomes to RAID controllers and serious server support. You're better off\n with a white-box solution, where you can buy the exact parts \nrecommended in this group and get technical advice from people who know \nwhat they're talking about. Dell basically doesn't understand Postgres.\nThey boast excellent on-site service, but for the price of their \ncomputers and their service contract, you can buy two servers from a \nwhite-box vendor. Our white-box servers have been just as reliable as \nthe Dell servers -- no failures.\nI'm sure someone in Europe can recommend a good vendor for you.Craig James \n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 1 Mar 2013 07:28:30 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "pls choice PCI-E Flash for written heavy app\n\nWales\n\n在 2013-3-1,下午8:43,Niels Kristian Schjødt <[email protected]> 写道:\n\n> Hi, I'm going to setup a new server for my postgresql database, and I am considering one of these: http://www.hetzner.de/hosting/produkte_rootserver/poweredge-r720 with four SAS drives in a RAID 10 array. Has any of you any particular comments/pitfalls/etc. to mention on the setup? My application is very write heavy.\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 2 Mar 2013 02:05:00 +0900",
"msg_from": "Wales Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "Thanks both of you for your input.\n\nEarlier I have been discussing my extremely high IO wait with you here on the mailing list, and have tried a lot of tweaks both on postgresql config, wal directly location and kernel tweaks, but unfortunately my problem persists, and I think I'm eventually down to just bad hardware (currently two 7200rpm disks in a software raid 1). So changing to 4 15000rpm SAS disks in a raid 10 is probably going to change a lot - don't you think? However, we are running a lot of background processing 300 connections to db sometimes. So my question is, should I also get something like pgpool2 setup at the same time? Is it, from your experience, likely to increase my throughput a lot more, if I had a connection pool of eg. 20 connections, instead of 300 concurrent ones directly?\n\nDen 01/03/2013 kl. 16.28 skrev Craig James <[email protected]>:\n\n> On Fri, Mar 1, 2013 at 3:43 AM, Niels Kristian Schjødt <[email protected]> wrote:\n> Hi, I'm going to setup a new server for my postgresql database, and I am considering one of these: http://www.hetzner.de/hosting/produkte_rootserver/poweredge-r720 with four SAS drives in a RAID 10 array. Has any of you any particular comments/pitfalls/etc. to mention on the setup? My application is very write heavy.\n> \n> I can only tell you our experience with Dell from several years ago. We bought two Dell servers similar (somewhat larger) than the model you're looking at. We'll never buy from them again.\n> \n> Advantages: They work. They haven't failed.\n> \n> Disadvantages: \n> \n> Performance sucks. Dell costs far more than \"white box\" servers we buy from a \"white box\" supplier (ASA Computers). ASA gives us roughly double the performance for the same price. We can buy exactly what we want from ASA.\n> \n> Dell did a disk-drive \"lock in.\" The RAID controller won't spin up a non-Dell disk. They wanted roughly four times the price for their disks compared to buying the exact same disks on Amazon. If a disk went out today, it would probably cost even more because that model is obsolete (luckily, we bought a couple spares). I think they abandoned this policy because it caused so many complaints, but you should check before you buy. This was an incredibly stupid RAID controller design.\n> \n> Dell tech support doesn't know what they're talking about when it comes to RAID controllers and serious server support. You're better off with a white-box solution, where you can buy the exact parts recommended in this group and get technical advice from people who know what they're talking about. Dell basically doesn't understand Postgres.\n> \n> They boast excellent on-site service, but for the price of their computers and their service contract, you can buy two servers from a white-box vendor. Our white-box servers have been just as reliable as the Dell servers -- no failures.\n> \n> I'm sure someone in Europe can recommend a good vendor for you.\n> \n> Craig James\n> \n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\nThanks both of you for your input.Earlier I have been discussing my extremely high IO wait with you here on the mailing list, and have tried a lot of tweaks both on postgresql config, wal directly location and kernel tweaks, but unfortunately my problem persists, and I think I'm eventually down to just bad hardware (currently two 7200rpm disks in a software raid 1). 
So changing to 4 15000rpm SAS disks in a raid 10 is probably going to change a lot - don't you think? However, we are running a lot of background processing 300 connections to db sometimes. So my question is, should I also get something like pgpool2 setup at the same time? Is it, from your experience, likely to increase my throughput a lot more, if I had a connection pool of eg. 20 connections, instead of 300 concurrent ones directly?Den 01/03/2013 kl. 16.28 skrev Craig James <[email protected]>:On Fri, Mar 1, 2013 at 3:43 AM, Niels Kristian Schjødt <[email protected]> wrote:\nHi, I'm going to setup a new server for my postgresql database, and I am considering one of these: http://www.hetzner.de/hosting/produkte_rootserver/poweredge-r720 with four SAS drives in a RAID 10 array. Has any of you any particular comments/pitfalls/etc. to mention on the setup? My application is very write heavy.\nI can only tell you our experience with Dell from several years ago. We\n bought two Dell servers similar (somewhat larger) than the model you're\n looking at. We'll never buy from them again.\nAdvantages: They work. They haven't failed.Disadvantages: Performance\n sucks. Dell costs far more than \"white box\" servers we buy from a \n\"white box\" supplier (ASA Computers). ASA gives us roughly double the \nperformance for the same price. We can buy exactly what we want from \nASA.\nDell did a disk-drive \"lock in.\" The RAID controller won't spin up a\n non-Dell disk. They wanted roughly four times the price for their \ndisks compared to buying the exact same disks on Amazon. If a disk went\n out today, it would probably cost even more because that model is \nobsolete (luckily, we bought a couple spares). I think they abandoned \nthis policy because it caused so many complaints, but you should check \nbefore you buy. This was an incredibly stupid RAID controller design.\nDell tech support doesn't know what they're talking about when it \ncomes to RAID controllers and serious server support. You're better off\n with a white-box solution, where you can buy the exact parts \nrecommended in this group and get technical advice from people who know \nwhat they're talking about. Dell basically doesn't understand Postgres.\nThey boast excellent on-site service, but for the price of their \ncomputers and their service contract, you can buy two servers from a \nwhite-box vendor. Our white-box servers have been just as reliable as \nthe Dell servers -- no failures.\nI'm sure someone in Europe can recommend a good vendor for you.Craig James \n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 4 Mar 2013 12:20:49 +0100",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "Niels Kristian Schjødt <[email protected]> wrote:\n\n> So my question is, should I also get something like pgpool2 setup\n> at the same time? Is it, from your experience, likely to increase\n> my throughput a lot more, if I had a connection pool of eg. 20\n> connections, instead of 300 concurrent ones directly?\n\nIn my experience, it can make a big difference. If you are just\nusing the pooler for this reason, and don't need any of the other\nfeatures of pgpool, I suggest pgbouncer. It is a simpler, more\nlightweight tool.\n\n-- \nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 5 Mar 2013 08:34:04 -0800 (PST)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On Tue, Mar 5, 2013 at 9:34 AM, Kevin Grittner <[email protected]> wrote:\n> Niels Kristian Schjødt <[email protected]> wrote:\n>\n>> So my question is, should I also get something like pgpool2 setup\n>> at the same time? Is it, from your experience, likely to increase\n>> my throughput a lot more, if I had a connection pool of eg. 20\n>> connections, instead of 300 concurrent ones directly?\n>\n> In my experience, it can make a big difference. If you are just\n> using the pooler for this reason, and don't need any of the other\n> features of pgpool, I suggest pgbouncer. It is a simpler, more\n> lightweight tool.\n\nI second the pgbouncer rec.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 5 Mar 2013 10:10:21 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "Thanks, that was actually what I just ended up doing yesterday. Any suggestion how to tune pgbouncer?\n\nBTW, I have just bumped into an issue that caused me to disable pgbouncer again actually. My web application is querying the database with a per request based SEARCH_PATH. This is because I use schemas to provide country based separation of my data (e.g. english, german, danish data in different schemas). I have pgbouncer setup to have a transactional behavior (pool_mode = transaction) - however some of my colleagues complained that it sometimes didn't return data from the right schema set in the SEARCH_PATH - you wouldn't by chance have any idea what is going wrong wouldn't you?\n\n#################### pgbouncer.ini\n[databases]\nproduction =\n\n[pgbouncer]\n\nlogfile = /var/log/pgbouncer/pgbouncer.log\npidfile = /var/run/pgbouncer/pgbouncer.pid\nlisten_addr = localhost\nlisten_port = 6432\nunix_socket_dir = /var/run/postgresql\nauth_type = md5\nauth_file = /etc/pgbouncer/userlist.txt\nadmin_users = postgres\npool_mode = transaction\nserver_reset_query = DISCARD ALL\nmax_client_conn = 500\ndefault_pool_size = 20\nreserve_pool_size = 5\nreserve_pool_timeout = 10\n#####################\n\n\nDen 05/03/2013 kl. 17.34 skrev Kevin Grittner <[email protected]>:\n\n> Niels Kristian Schjødt <[email protected]> wrote:\n> \n>> So my question is, should I also get something like pgpool2 setup\n>> at the same time? Is it, from your experience, likely to increase\n>> my throughput a lot more, if I had a connection pool of eg. 20\n>> connections, instead of 300 concurrent ones directly?\n> \n> In my experience, it can make a big difference. If you are just\n> using the pooler for this reason, and don't need any of the other\n> features of pgpool, I suggest pgbouncer. It is a simpler, more\n> lightweight tool.\n> \n> -- \n> Kevin Grittner\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 5 Mar 2013 18:11:48 +0100",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New server setup"
},
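With pool_mode = transaction, anything set with a plain SET (such as search_path) is session state on the server connection and can be seen by whichever client is handed that connection next, which matches the symptom described above. One hedged workaround, sketched here with made-up schema, table and user names, is to scope the setting to each transaction with SET LOCAL (or to schema-qualify all object names); the port and database name come from the configuration shown above.

psql -h localhost -p 6432 -U app production <<'SQL'
BEGIN;
SET LOCAL search_path TO german, public;  -- lasts only until COMMIT/ROLLBACK
SELECT count(*) FROM listings;            -- resolved against the german schema
COMMIT;
SQL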
{
"msg_contents": "Set it to use session. I had a similar issue having moved one of the components of our app to use transactions, which introduced an undesired behavior.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Niels Kristian Schjødt\nSent: Tuesday, March 05, 2013 10:12 AM\nTo: Kevin Grittner\nCc: Craig James; [email protected]\nSubject: Re: [PERFORM] New server setup\n\nThanks, that was actually what I just ended up doing yesterday. Any suggestion how to tune pgbouncer?\n\nBTW, I have just bumped into an issue that caused me to disable pgbouncer again actually. My web application is querying the database with a per request based SEARCH_PATH. This is because I use schemas to provide country based separation of my data (e.g. english, german, danish data in different schemas). I have pgbouncer setup to have a transactional behavior (pool_mode = transaction) - however some of my colleagues complained that it sometimes didn't return data from the right schema set in the SEARCH_PATH - you wouldn't by chance have any idea what is going wrong wouldn't you?\n\n#################### pgbouncer.ini\n[databases]\nproduction =\n\n[pgbouncer]\n\nlogfile = /var/log/pgbouncer/pgbouncer.log pidfile = /var/run/pgbouncer/pgbouncer.pid listen_addr = localhost listen_port = 6432 unix_socket_dir = /var/run/postgresql auth_type = md5 auth_file = /etc/pgbouncer/userlist.txt admin_users = postgres pool_mode = transaction server_reset_query = DISCARD ALL max_client_conn = 500 default_pool_size = 20 reserve_pool_size = 5 reserve_pool_timeout = 10 #####################\n\n\nDen 05/03/2013 kl. 17.34 skrev Kevin Grittner <[email protected]>:\n\n> Niels Kristian Schjødt <[email protected]> wrote:\n> \n>> So my question is, should I also get something like pgpool2 setup at \n>> the same time? Is it, from your experience, likely to increase my \n>> throughput a lot more, if I had a connection pool of eg. 20 \n>> connections, instead of 300 concurrent ones directly?\n> \n> In my experience, it can make a big difference. If you are just using \n> the pooler for this reason, and don't need any of the other features \n> of pgpool, I suggest pgbouncer. It is a simpler, more lightweight \n> tool.\n> \n> --\n> Kevin Grittner\n> EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL \n> Company\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 5 Mar 2013 11:03:44 -0700",
"msg_from": "\"Benjamin Krajmalnik\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "Okay, thanks - but hey - if I put it at session pooling, then it says in the documentation: \"default_pool_size: In session pooling it needs to be the number of max clients you want to handle at any moment\". So as I understand it, is it true that I then have to set default_pool_size to 300 if I have up to 300 client connections? And then what would the pooler then help on my performance - would that just be exactly like having the 300 clients connect directly to the database???\n\n-NK\n\n\nDen 05/03/2013 kl. 19.03 skrev \"Benjamin Krajmalnik\" <[email protected]>:\n\n> \n\n\nOkay, thanks - but hey - if I put it at session pooling, then it says in the documentation: \"default_pool_size: In session pooling it needs to be the number of max clients you want to handle at any moment\". So as I understand it, is it true that I then have to set default_pool_size to 300 if I have up to 300 client connections? And then what would the pooler then help on my performance - would that just be exactly like having the 300 clients connect directly to the database???-NKDen 05/03/2013 kl. 19.03 skrev \"Benjamin Krajmalnik\" <[email protected]>:",
"msg_date": "Tue, 5 Mar 2013 19:27:32 +0100",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On Tue, Mar 5, 2013 at 10:27 AM, Niels Kristian Schjødt <\[email protected]> wrote:\n\n> Okay, thanks - but hey - if I put it at session pooling, then it says in\n> the documentation: \"default_pool_size: In session pooling it needs to be\n> the number of max clients you want to handle at any moment\". So as I\n> understand it, is it true that I then have to set default_pool_size to 300\n> if I have up to 300 client connections?\n>\n\nIf those 300 client connections are all long-lived, then yes you need that\nmany in the pool. If they are short-lived connections, then you can have a\nlot less as any ones over the default_pool_size will simply block until an\nexisting connection is closed and can be re-assigned--which won't take long\nif they are short-lived connections.\n\n\nAnd then what would the pooler then help on my performance - would that\n> just be exactly like having the 300 clients connect directly to the\n> database???\n>\n\nIt would probably be even worse than having 300 clients connected\ndirectly. There would be no point in using a pooler under those conditions.\n\n\nCheers,\n\nJeff\n\nOn Tue, Mar 5, 2013 at 10:27 AM, Niels Kristian Schjødt <[email protected]> wrote:\nOkay, thanks - but hey - if I put it at session pooling, then it says in the documentation: \"default_pool_size: In session pooling it needs to be the number of max clients you want to handle at any moment\". So as I understand it, is it true that I then have to set default_pool_size to 300 if I have up to 300 client connections?\nIf those 300 client connections are all long-lived, then yes you need that many in the pool. If they are short-lived connections, then you can have a lot less as any ones over the default_pool_size will simply block until an existing connection is closed and can be re-assigned--which won't take long if they are short-lived connections.\n And then what would the pooler then help on my performance - would that just be exactly like having the 300 clients connect directly to the database???\nIt would probably be even worse than having 300 clients connected directly. There would be no point in using a pooler under those conditions. Cheers,Jeff",
"msg_date": "Tue, 5 Mar 2013 13:59:14 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "In my recent experience PgPool2 performs pretty badly as a pooler. I'd\navoid it if possible, unless you depend on other features.\nIt simply doesn't scale.\n\n\n\nOn 5 March 2013 21:59, Jeff Janes <[email protected]> wrote:\n\n> On Tue, Mar 5, 2013 at 10:27 AM, Niels Kristian Schjødt <\n> [email protected]> wrote:\n>\n>> Okay, thanks - but hey - if I put it at session pooling, then it says in\n>> the documentation: \"default_pool_size: In session pooling it needs to be\n>> the number of max clients you want to handle at any moment\". So as I\n>> understand it, is it true that I then have to set default_pool_size to 300\n>> if I have up to 300 client connections?\n>>\n>\n> If those 300 client connections are all long-lived, then yes you need that\n> many in the pool. If they are short-lived connections, then you can have a\n> lot less as any ones over the default_pool_size will simply block until an\n> existing connection is closed and can be re-assigned--which won't take long\n> if they are short-lived connections.\n>\n>\n> And then what would the pooler then help on my performance - would that\n>> just be exactly like having the 300 clients connect directly to the\n>> database???\n>>\n>\n> It would probably be even worse than having 300 clients connected\n> directly. There would be no point in using a pooler under those conditions.\n>\n>\n> Cheers,\n>\n> Jeff\n>\n\n\n\n-- \nGJ\n\nIn my recent experience PgPool2 performs pretty badly as a pooler. I'd avoid it if possible, unless you depend on other features. It simply doesn't scale. \nOn 5 March 2013 21:59, Jeff Janes <[email protected]> wrote:\nOn Tue, Mar 5, 2013 at 10:27 AM, Niels Kristian Schjødt <[email protected]> wrote:\n\nOkay, thanks - but hey - if I put it at session pooling, then it says in the documentation: \"default_pool_size: In session pooling it needs to be the number of max clients you want to handle at any moment\". So as I understand it, is it true that I then have to set default_pool_size to 300 if I have up to 300 client connections?\nIf those 300 client connections are all long-lived, then yes you need that many in the pool. If they are short-lived connections, then you can have a lot less as any ones over the default_pool_size will simply block until an existing connection is closed and can be re-assigned--which won't take long if they are short-lived connections.\n And then what would the pooler then help on my performance - would that just be exactly like having the 300 clients connect directly to the database???\nIt would probably be even worse than having 300 clients connected directly. There would be no point in using a pooler under those conditions. Cheers,Jeff\n-- GJ",
"msg_date": "Sat, 9 Mar 2013 17:53:19 +0000",
"msg_from": "Gregg Jaskiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On 3/1/13 6:43 AM, Niels Kristian Schj�dt wrote:\n> Hi, I'm going to setup a new server for my postgresql database, and I am considering one of these: http://www.hetzner.de/hosting/produkte_rootserver/poweredge-r720 with four SAS drives in a RAID 10 array. Has any of you any particular comments/pitfalls/etc. to mention on the setup? My application is very write heavy.\n\nThe Dell PERC H710 (actually a LSI controller) works fine for \nwrite-heavy workloads on a RAID 10, as long as you order it with a \nbattery backup unit module. Someone must install the controller \nmanagement utility and do three things however:\n\n1) Make sure the battery-backup unit is working.\n\n2) Configure the controller so that the *disk* write cache is off.\n\n3) Set the controller cache to \"write-back when battery is available\". \nThat will use the cache when it is safe to do so, and if not it will \nbypass it. That will make the server slow down if the battery fails, \nbut it won't ever become unsafe at writing.\n\nSee http://wiki.postgresql.org/wiki/Reliable_Writes for more information \nabout this topic. If you'd like some consulting help with making sure \nthe server is working safely and as fast as it should be, 2ndQuadrant \ndoes offer a hardware benchmarking service to do that sort of thing: \nhttp://www.2ndquadrant.com/en/hardware-benchmarking/ I think we're even \ngenerating those reports in German now.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 10 Mar 2013 11:58:47 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
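The three controller settings above can be checked and applied from the command line with LSI's MegaCli utility (the PERC H710 is an LSI chip, as noted above). A rough sketch only - the binary name, adapter number and logical-drive numbering vary by system, so verify against your own setup:

    # 1) check the battery-backup unit
    MegaCli64 -AdpBbuCmd -GetBbuStatus -aAll

    # 2) turn the individual disks' write caches off
    MegaCli64 -LDSetProp -DisDskCache -LAll -aAll

    # 3) write-back only while the battery is good
    MegaCli64 -LDSetProp WB -LAll -aAll
    MegaCli64 -LDSetProp NoCachedBadBBU -LAll -aAll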
{
"msg_contents": "On 10 March 2013 15:58, Greg Smith <[email protected]> wrote:\n\n> On 3/1/13 6:43 AM, Niels Kristian Schjødt wrote:\n>\n>> Hi, I'm going to setup a new server for my postgresql database, and I am\n>> considering one of these: http://www.hetzner.de/hosting/**\n>> produkte_rootserver/poweredge-**r720<http://www.hetzner.de/hosting/produkte_rootserver/poweredge-r720>with four SAS drives in a RAID 10 array. Has any of you any particular\n>> comments/pitfalls/etc. to mention on the setup? My application is very\n>> write heavy.\n>>\n>\n> The Dell PERC H710 (actually a LSI controller) works fine for write-heavy\n> workloads on a RAID 10, as long as you order it with a battery backup unit\n> module. Someone must install the controller management utility and do\n> three things however:\n>\n> We're going to go with either HP or IBM (customer's preference, etc).\n\n\n\n> 1) Make sure the battery-backup unit is working.\n>\n> 2) Configure the controller so that the *disk* write cache is off.\n>\n> 3) Set the controller cache to \"write-back when battery is available\".\n> That will use the cache when it is safe to do so, and if not it will bypass\n> it. That will make the server slow down if the battery fails, but it won't\n> ever become unsafe at writing.\n>\n> See http://wiki.postgresql.org/**wiki/Reliable_Writes<http://wiki.postgresql.org/wiki/Reliable_Writes>for more information about this topic. If you'd like some consulting help\n> with making sure the server is working safely and as fast as it should be,\n> 2ndQuadrant does offer a hardware benchmarking service to do that sort of\n> thing: http://www.2ndquadrant.com/en/**hardware-benchmarking/<http://www.2ndquadrant.com/en/hardware-benchmarking/> I think we're even generating those reports in German now.\n\n\n\nThanks Greg. I will follow advice there, and also the one in your book. I\ndo always make sure they order battery backed cache (or flash based, which\nseems to be what people use these days).\n\nI think subject of using external help with setting things up did came up,\nbut more around connection pooling subject then hardware itself (shortly,\npgpool2 is crap, we will go with dns based solution and apps connection\ndirectly to nodes).\nI will let my clients (doing this on a contract) know that there's an\noption to get you guys to help us. Mind you, this database is rather small\nin grand scheme of things (30-40GB). Just possibly a lot of occasional\nwrites.\n\nWe wouldn't need German. But Proper English (i.e. british english) would\nalways be nice ;)\n\n\nWhilst on the hardware subject, someone mentioned throwing ssd into the\nmix. I.e. combining spinning HDs with SSD, apparently some raid cards can\nuse small-ish (80GB+) SSDs as external caches. Any experiences with that ?\n\n\nThanks !\n\n\n\n\n-- \nGJ\n\nOn 10 March 2013 15:58, Greg Smith <[email protected]> wrote:\nOn 3/1/13 6:43 AM, Niels Kristian Schjødt wrote:\n\nHi, I'm going to setup a new server for my postgresql database, and I am considering one of these: http://www.hetzner.de/hosting/produkte_rootserver/poweredge-r720 with four SAS drives in a RAID 10 array. Has any of you any particular comments/pitfalls/etc. to mention on the setup? My application is very write heavy.\n\n\nThe Dell PERC H710 (actually a LSI controller) works fine for write-heavy workloads on a RAID 10, as long as you order it with a battery backup unit module. 
Someone must install the controller management utility and do three things however:\nWe're going to go with either HP or IBM (customer's preference, etc). \n\n1) Make sure the battery-backup unit is working.\n\n2) Configure the controller so that the *disk* write cache is off.\n\n3) Set the controller cache to \"write-back when battery is available\". That will use the cache when it is safe to do so, and if not it will bypass it. That will make the server slow down if the battery fails, but it won't ever become unsafe at writing.\n\nSee http://wiki.postgresql.org/wiki/Reliable_Writes for more information about this topic. If you'd like some consulting help with making sure the server is working safely and as fast as it should be, 2ndQuadrant does offer a hardware benchmarking service to do that sort of thing: http://www.2ndquadrant.com/en/hardware-benchmarking/ I think we're even generating those reports in German now.\nThanks Greg. I will follow advice there, and also the one in your book. I do always make sure they order battery backed cache (or flash based, which seems to be what people use these days). \nI think subject of using external help with setting things up did came up, but more around connection pooling subject then hardware itself (shortly, pgpool2 is crap, we will go with dns based solution and apps connection directly to nodes). \nI will let my clients (doing this on a contract) know that there's an option to get you guys to help us. Mind you, this database is rather small in grand scheme of things (30-40GB). Just possibly a lot of occasional writes.\nWe wouldn't need German. But Proper English (i.e. british english) would always be nice ;)Whilst on the hardware subject, someone mentioned throwing ssd into the mix. I.e. combining spinning HDs with SSD, apparently some raid cards can use small-ish (80GB+) SSDs as external caches. Any experiences with that ?\nThanks ! -- GJ",
"msg_date": "Tue, 12 Mar 2013 21:41:08 +0000",
"msg_from": "Gregg Jaskiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On 12/03/2013 21:41, Gregg Jaskiewicz wrote:\n>\n> Whilst on the hardware subject, someone mentioned throwing ssd into \n> the mix. I.e. combining spinning HDs with SSD, apparently some raid \n> cards can use small-ish (80GB+) SSDs as external caches. Any \n> experiences with that ?\n>\nThe new LSI/Dell cards do this (eg H710 as mentioned in an earlier \npost). It is easy to set up and supported it seems on all versions of \ndells cards even if the docs say it isn't. Works well with the limited \ntesting I did, switched to pretty much all SSDs drives in my current setup\n\nThese cards also supposedly support enhanced performance with just SSDs \n(CTIO) by playing with the cache settings, but to be honest I haven't \nnoticed any difference and I'm not entirely sure it is enabled as there \nis no indication that CTIO is actually enabled and working.\n\nJohn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Mar 2013 15:33:37 +0000",
"msg_from": "John Lister <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "\nOn 13 Mar 2013, at 15:33, John Lister <[email protected]> wrote:\n\n> On 12/03/2013 21:41, Gregg Jaskiewicz wrote:\n>> \n>> Whilst on the hardware subject, someone mentioned throwing ssd into the mix. I.e. combining spinning HDs with SSD, apparently some raid cards can use small-ish (80GB+) SSDs as external caches. Any experiences with that ?\n>> \n> The new LSI/Dell cards do this (eg H710 as mentioned in an earlier post). It is easy to set up and supported it seems on all versions of dells cards even if the docs say it isn't. Works well with the limited testing I did, switched to pretty much all SSDs drives in my current setup\n> \n> These cards also supposedly support enhanced performance with just SSDs (CTIO) by playing with the cache settings, but to be honest I haven't noticed any difference and I'm not entirely sure it is enabled as there is no indication that CTIO is actually enabled and working.\n> \nSSDs have much shorter life then spinning drives, so what do you do when one inevitably fails in your system ?\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Mar 2013 15:50:51 +0000",
"msg_from": "Greg Jaskiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On 13/03/2013 15:50, Greg Jaskiewicz wrote:\n> SSDs have much shorter life then spinning drives, so what do you do when one inevitably fails in your system ?\nDefine much shorter? I accept they have a limited no of writes, but that \ndepends on load. You can actively monitor the drives \"health\" level in \nterms of wear using smart and it is relatively straightforward to \ncalculate an estimate of life based on average use and for me that works \nout at about in excess of 5 years. Experience tells me that spinning \ndrives have a habit of failing in that time frame as well :( and in 5 \nyears I'll be replacing the server probably.\n\nI also overprovisioned the drives by about an extra 13% giving me 20% \nspare capacity when adding in the 7% manufacturer spare space - given \nthis currently my drives have written about 4TB of data each and show 0% \nwear, this is for 160GB drives. I actively monitor the wear level and \nplan to replace the drives when they get low. For a comparison of write \nlevels see \nhttp://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm, \nit shows for the 320series that it reported to have hit the wear limit \nat 190TB (for a drive 1/4 the size of mine) but actually managed nearer \n700TB before the drive failed.\n\nI've mixed 2 different manufacturers in my raid 10 pairs to mitigate \nagainst both pairs failing at the same time either due to a firmware bug \nor being full In addition when I was setting the box up I did some \nperformance testing against the drives but with using different \ncombinations for each test - the aim here is to pre-load each drive \ndifferently to prevent them failing when full simultaneously.\n\nIf you do go for raid 10 make sure to have a power fail endurance, ie \ncapacitor or battery on the drive.\n\nJohn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Mar 2013 16:15:31 +0000",
"msg_from": "John Lister <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
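The wear monitoring described above can be done with smartmontools; the attribute names vary by vendor (Media_Wearout_Indicator on the Intel drives discussed here, Wear_Leveling_Count on some others), so treat this as a rough sketch and check what your drive actually reports:

    # dump the SMART attributes and pick out the wear/write counters
    smartctl -A /dev/sda | grep -i -E 'wearout|wear_leveling|host_writes'

The wear attribute typically starts at a normalized value of 100 and counts down; replacing the drive well before it reaches the manufacturer's threshold is the conservative play.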
{
"msg_contents": "On 03/13/2013 09:15 AM, John Lister wrote:\n> On 13/03/2013 15:50, Greg Jaskiewicz wrote:\n>> SSDs have much shorter life then spinning drives, so what do you do \n>> when one inevitably fails in your system ?\n> Define much shorter? I accept they have a limited no of writes, but \n> that depends on load. You can actively monitor the drives \"health\" \n> level...\n\nWhat concerns me more than wear is this:\n\nInfoWorld Article:\nhttp://www.infoworld.com/t/solid-state-drives/test-your-ssds-or-risk-massive-data-loss-researchers-warn-213715\n\nReferenced research paper:\nhttps://www.usenix.org/conference/fast13/understanding-robustness-ssds-under-power-fault\n\nKind of messes with the \"D\" in ACID.\n\nCheers,\nSteve\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Mar 2013 12:23:18 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On 3/13/2013 2:23 PM, Steve Crawford wrote:\n> On 03/13/2013 09:15 AM, John Lister wrote:\n>> On 13/03/2013 15:50, Greg Jaskiewicz wrote:\n>>> SSDs have much shorter life then spinning drives, so what do you do\n>>> when one inevitably fails in your system ?\n>> Define much shorter? I accept they have a limited no of writes, but\n>> that depends on load. You can actively monitor the drives \"health\"\n>> level...\n>\n> What concerns me more than wear is this:\n>\n> InfoWorld Article:\n> http://www.infoworld.com/t/solid-state-drives/test-your-ssds-or-risk-massive-data-loss-researchers-warn-213715\n>\n>\n> Referenced research paper:\n> https://www.usenix.org/conference/fast13/understanding-robustness-ssds-under-power-fault\n>\n>\n> Kind of messes with the \"D\" in ACID.\n>\n> Cheers,\n> Steve\n\nOne potential way around this is to run ZFS as the underlying filesystem\nand use the SSDs as cache drives. If they lose data due to a power\nproblem it is non-destructive.\n\nShort of that you cannot use a SSD on a machine where silent corruption\nis unacceptable UNLESS you know it has a supercap or similar IN THE DISK\nthat guarantees that on-drive cache can be flushed in the event of a\npower failure. A battery-backed controller cache DOES NOTHING to\nalleviate this risk. If you violate this rule and the power goes off\nyou must EXPECT silent and possibly-catastrophic data corruption.\n\nOnly a few (and they're expensive!) SSD drives have said protection. If\nyours does not the only SAFE option is as I described up above using\nthem as ZFS cache devices.\n\n-- \n-- Karl Denninger\n/The Market Ticker ®/ <http://market-ticker.org>\nCuda Systems LLC\n\n\n\n\n\n\n\nOn 3/13/2013 2:23 PM, Steve Crawford\n wrote:\n\nOn 03/13/2013 09:15 AM, John Lister wrote:\n \nOn 13/03/2013 15:50, Greg Jaskiewicz\n wrote:\n \nSSDs have much shorter life then\n spinning drives, so what do you do when one inevitably fails\n in your system ?\n \n\n Define much shorter? I accept they have a limited no of writes,\n but that depends on load. You can actively monitor the drives\n \"health\" level...\n \n\n\n What concerns me more than wear is this:\n \n\n InfoWorld Article:\n \nhttp://www.infoworld.com/t/solid-state-drives/test-your-ssds-or-risk-massive-data-loss-researchers-warn-213715\n\n\n Referenced research paper:\n \nhttps://www.usenix.org/conference/fast13/understanding-robustness-ssds-under-power-fault\n\n\n Kind of messes with the \"D\" in ACID.\n \n\n Cheers,\n \n Steve\n \n\n\n One potential way around this is to run ZFS as the underlying\n filesystem and use the SSDs as cache drives. If they lose data due\n to a power problem it is non-destructive.\n\n Short of that you cannot use a SSD on a machine where silent\n corruption is unacceptable UNLESS you know it has a supercap or\n similar IN THE DISK that guarantees that on-drive cache can be\n flushed in the event of a power failure. A battery-backed\n controller cache DOES NOTHING to alleviate this risk. If you\n violate this rule and the power goes off you must EXPECT silent and\n possibly-catastrophic data corruption.\n\n Only a few (and they're expensive!) SSD drives have said\n protection. If yours does not the only SAFE option is as I\n described up above using them as ZFS cache devices.\n\n-- \n -- Karl Denninger\nThe Market Ticker ®\n Cuda Systems LLC",
"msg_date": "Wed, 13 Mar 2013 14:38:27 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
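For reference, adding an SSD as a ZFS cache (L2ARC) device is a one-liner; the pool and device names below are made up for illustration. Because a cache vdev only holds copies of data that already lives on the main disks, losing it to a power event or wear-out degrades performance but not durability:

    # add an SSD read cache to an existing pool called "tank"
    zpool add tank cache /dev/ada2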
{
"msg_contents": "On Mar 13, 2013, at 3:23 PM, Steve Crawford wrote:\n\n> On 03/13/2013 09:15 AM, John Lister wrote:\n>> On 13/03/2013 15:50, Greg Jaskiewicz wrote:\n>>> SSDs have much shorter life then spinning drives, so what do you do when one inevitably fails in your system ?\n>> Define much shorter? I accept they have a limited no of writes, but that depends on load. You can actively monitor the drives \"health\" level...\n> \n> What concerns me more than wear is this:\n> \n> InfoWorld Article:\n> http://www.infoworld.com/t/solid-state-drives/test-your-ssds-or-risk-massive-data-loss-researchers-warn-213715\n> \n> Referenced research paper:\n> https://www.usenix.org/conference/fast13/understanding-robustness-ssds-under-power-fault\n> \n> Kind of messes with the \"D\" in ACID.\n\nHave a look at this:\n\nhttp://blog.2ndquadrant.com/intel_ssd_now_off_the_sherr_sh/\n\nI'm not sure what other ssds offer this, but Intel's newest entry will, and it's attractively priced.\n\nAnother way we leverage SSDs that can be more reliable in the face of total SSD meltdown is to use them as ZFS Intent Log caches. All the sync writes get handled on the SSDs. We deploy them as mirrored vdevs, so if one fails, we're OK. If both fail, we're really slow until someone can replace them. On modest hardware, I was able to get about 20K TPS out of pgbench with the SSDs configured as ZIL and 4 10K raptors as the spinny disks.\n\nIn either case, the amount of money you'd have to spend on the two-dozen or so SAS drives (and the controllers, enclosure, etc.) that would equal a few pairs of SSDs in random IO performance is non-trivial, even if you plan on proactively retiring your SSDs every year.\n\nJust another take on the issue..\n\nCharles\n\n> \n> Cheers,\n> Steve\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Mar 2013 15:47:14 -0400",
"msg_from": "CSS <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
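The mirrored ZIL arrangement described above corresponds to adding a mirrored log vdev, again with hypothetical device names. Sync writes (including WAL flushes) land on the SSD pair first, and mirroring the log means a single failed SSD does not take the recent sync writes with it:

    # add a mirrored SLOG to the pool so sync writes hit the SSDs
    zpool add tank log mirror /dev/ada3 /dev/ada4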
{
"msg_contents": "On 13/03/2013 19:23, Steve Crawford wrote:\n> On 03/13/2013 09:15 AM, John Lister wrote:\n>> On 13/03/2013 15:50, Greg Jaskiewicz wrote:\n>>> SSDs have much shorter life then spinning drives, so what do you do \n>>> when one inevitably fails in your system ?\n>> Define much shorter? I accept they have a limited no of writes, but \n>> that depends on load. You can actively monitor the drives \"health\" \n>> level...\n>\n> What concerns me more than wear is this:\n>\n> InfoWorld Article:\n> http://www.infoworld.com/t/solid-state-drives/test-your-ssds-or-risk-massive-data-loss-researchers-warn-213715 \n>\nWhen I read this they didn't name the drives that failed - or those that \npassed. But I'm assuming the failed ones are standard consumer SSDS, but \n2 good ones were either enterprise of had caps. The reason I say this, \nis that yes SSD drives by the nature of their operation cache/store \ninformation in ram while they write it to the flash and to handle the \nmappings, etc of real to virtual sectors and if they loose power it is \nthis that is lost, causing at best corruption if not complete loss of \nthe drive. Enterprise drives (and some consumer, such as the 320s) have \neither capacitors or battery backup to allows the drive to safely \nshutdown. There have been various reports both on this list and \nelsewhere showing that these drives successfully survive repeated power \nfailures.\n\nA bigger concern is the state of the firmware in these drives which \nuntil recently was more likely to trash your drive - fortunately things \nseems to becoming more stable with age now.\n\nJohn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Mar 2013 20:05:52 +0000",
"msg_from": "John Lister <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On 3/13/2013 1:23 PM, Steve Crawford wrote:\n>\n> What concerns me more than wear is this:\n>\n> InfoWorld Article:\n> http://www.infoworld.com/t/solid-state-drives/test-your-ssds-or-risk-massive-data-loss-researchers-warn-213715 \n>\n>\n> Referenced research paper:\n> https://www.usenix.org/conference/fast13/understanding-robustness-ssds-under-power-fault \n>\n>\n> Kind of messes with the \"D\" in ACID.\n\nIt is somewhat surprising to discover that many SSD products are not \ndurable under sudden power loss (what where they thinking!?, and ...why \ndoesn't anyone care??).\n\nHowever, there is a set of SSD types known to be designed to address \npower loss events that have been tested by contributors to this list.\nUse only those devices and you won't see this problem. SSDs do have a \nwear-out mechanism but wear can be monitored and devices replaced in \nadvance of failure. In practice longevity is such that most machines \nwill be in the dumpster long before the SSD wears out. We've had \nmachines running with several hundred wps constantly for 18 months using \nIntel 710 drives and the wear level SMART value is still zero.\n\nIn addition, like any electronics module (CPU, memory, NIC), an SSD can \nfail so you do need to arrange for valuable data to be replicated.\nAs with old school disk drives, firmware bugs are a concern so you might \nwant to consider what would happen if all the drives of a particular \ntype all decided to quit working at the same second in time (I've only \nseen this happen myself with magnetic drives, but in theory it could \nhappen with SSD).\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Mar 2013 14:16:16 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On 14/03/13 09:16, David Boreham wrote:\n> On 3/13/2013 1:23 PM, Steve Crawford wrote:\n>>\n>> What concerns me more than wear is this:\n>>\n>> InfoWorld Article:\n>> http://www.infoworld.com/t/solid-state-drives/test-your-ssds-or-risk-massive-data-loss-researchers-warn-213715\n>>\n>>\n>> Referenced research paper:\n>> https://www.usenix.org/conference/fast13/understanding-robustness-ssds-under-power-fault\n>>\n>>\n>> Kind of messes with the \"D\" in ACID.\n>\n> It is somewhat surprising to discover that many SSD products are not\n> durable under sudden power loss (what where they thinking!?, and ...why\n> doesn't anyone care??).\n>\n> However, there is a set of SSD types known to be designed to address\n> power loss events that have been tested by contributors to this list.\n> Use only those devices and you won't see this problem. SSDs do have a\n> wear-out mechanism but wear can be monitored and devices replaced in\n> advance of failure. In practice longevity is such that most machines\n> will be in the dumpster long before the SSD wears out. We've had\n> machines running with several hundred wps constantly for 18 months using\n> Intel 710 drives and the wear level SMART value is still zero.\n>\n> In addition, like any electronics module (CPU, memory, NIC), an SSD can\n> fail so you do need to arrange for valuable data to be replicated.\n> As with old school disk drives, firmware bugs are a concern so you might\n> want to consider what would happen if all the drives of a particular\n> type all decided to quit working at the same second in time (I've only\n> seen this happen myself with magnetic drives, but in theory it could\n> happen with SSD).\n>\n>\n\nJust going through this now with a vendor. They initially assured us \nthat the drives had \"end to end protection\" so we did not need to worry. \nI had to post stripdown pictures from Intel's s3700, showing obvious \ncapacitors attached to the board before I was taken seriously and \nactually meaningful specifications were revealed. So now I'm demanding \nto know:\n\n- chipset (and version)\n- original manufacturer (for re-badged ones)\n- power off protection *explicitly* mentioned\n- show me the circuit board (and where are the capacitors)\n\nSeems like you gotta push 'em!\n\nCheers\n\nMark\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Mar 2013 16:29:13 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On 3/13/2013 9:29 PM, Mark Kirkwood wrote:\n> Just going through this now with a vendor. They initially assured us \n> that the drives had \"end to end protection\" so we did not need to \n> worry. I had to post stripdown pictures from Intel's s3700, showing \n> obvious capacitors attached to the board before I was taken seriously \n> and actually meaningful specifications were revealed. So now I'm \n> demanding to know:\n>\n> - chipset (and version)\n> - original manufacturer (for re-badged ones)\n> - power off protection *explicitly* mentioned\n> - show me the circuit board (and where are the capacitors) \n\nIn addition to the above, I only use drives where I've seen compelling \nevidence that plug pull tests have been done and passed (e.g. done by \nsomeone on this list or in-house here). I also like to have a high \nlevel of confidence in the firmware development group. This results in a \nvery small set of acceptable products :(\n\n\n\n\n\n\n\n\n\nOn 3/13/2013 9:29 PM, Mark Kirkwood\n wrote:\n\nJust\n going through this now with a vendor. They initially assured us\n that the drives had \"end to end protection\" so we did not need to\n worry. I had to post stripdown pictures from Intel's s3700,\n showing obvious capacitors attached to the board before I was\n taken seriously and actually meaningful specifications were\n revealed. So now I'm demanding to know:\n \n\n - chipset (and version)\n \n - original manufacturer (for re-badged ones)\n \n - power off protection *explicitly* mentioned\n \n - show me the circuit board (and where are the capacitors)\n \n\n In addition to the above, I only use drives where I've seen\n compelling evidence that plug pull tests have been done and passed\n (e.g. done by someone on this list or in-house here). I also like\n to have a high level of confidence in the firmware development\n group. This results in a very small set of acceptable products :(",
"msg_date": "Wed, 13 Mar 2013 21:39:51 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
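The plug-pull testing mentioned above is commonly done with Brad Fitzpatrick's diskchecker.pl, which the Reliable_Writes wiki page linked earlier in the thread points at. Roughly as follows - hostnames and paths are illustrative, and the exact invocation is worth checking against the script's own usage text:

    # on a second machine that stays powered, run the listener
    diskchecker.pl -l

    # on the machine under test, write through the drive being tested
    diskchecker.pl -s otherhost create /mnt/ssd/test_file 500

    # pull the power on the machine under test, boot it again, then verify
    diskchecker.pl -s otherhost verify /mnt/ssd/test_file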
{
"msg_contents": "On Tue, Mar 12, 2013 at 09:41:08PM +0000, Gregg Jaskiewicz wrote:\n> On 10 March 2013 15:58, Greg Smith <[email protected]> wrote:\n> \n> On 3/1/13 6:43 AM, Niels Kristian Schj�dt wrote:\n> \n> Hi, I'm going to setup a new server for my postgresql database, and I\n> am considering one of these: http://www.hetzner.de/hosting/\n> produkte_rootserver/poweredge-r720 with four SAS drives in a RAID 10\n> array. Has any of you any particular comments/pitfalls/etc. to mention\n> on the setup? My application is very write heavy.\n> \n> \n> The Dell PERC H710 (actually a LSI controller) works fine for write-heavy\n> workloads on a RAID 10, as long as you order it with a battery backup unit\n> module. �Someone must install the controller management utility and do\n> three things however:\n> \n> \n> We're going to go with either HP or IBM (customer's preference, etc).�\n> \n> �\n> \n> 1) Make sure the battery-backup unit is working.\n> \n> 2) Configure the controller so that the *disk* write cache is off.\n\nOnly use SSDs with a BBU cache, and don't set SSD caches to\nwrite-through because an SSD needs to cache the write to avoid wearing\nout the chips early, see:\n\n\thttp://momjian.us/main/blogs/pgblog/2012.html#August_3_2012\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Mar 2013 14:54:24 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On 15/03/13 07:54, Bruce Momjian wrote:\n> Only use SSDs with a BBU cache, and don't set SSD caches to\n> write-through because an SSD needs to cache the write to avoid wearing\n> out the chips early, see:\n>\n> \thttp://momjian.us/main/blogs/pgblog/2012.html#August_3_2012\n>\n\nI not convinced about the need for BBU with SSD - you *can* use them \nwithout one, just need to make sure about suitable longevity and also \nthe presence of (proven) power off protection (as discussed previously). \nIt is worth noting that using unproven or SSD known to be lacking power \noff protection with a BBU will *not* save you from massive corruption \n(or device failure) upon unexpected power loss.\n\nAlso, in terms of performance, the faster PCIe SSD do about as well by \nthemselves as connected to a RAID card with BBU. In fact they will do \nbetter in some cases (the faster SSD can get close to the max IOPS many \nRAID cards can handle...so more than a couple of 'em plugged into one \ncard will be throttled by its limitations).\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Mar 2013 10:37:55 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On 15/03/13 10:37, Mark Kirkwood wrote:\n>\n> Also, in terms of performance, the faster PCIe SSD do about as well by \n> themselves as connected to a RAID card with BBU.\n>\n\nSorry - I meant to say \"the faster **SAS** SSD do...\", since you can't \ncurrently plug PCIe SSD into RAID cards (confusingly, some of the PCIe \nguys actually have RAID card firmware on their boards...Intel 910 I think).\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Mar 2013 10:47:07 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On Fri, Mar 15, 2013 at 10:37:55AM +1300, Mark Kirkwood wrote:\n> On 15/03/13 07:54, Bruce Momjian wrote:\n> >Only use SSDs with a BBU cache, and don't set SSD caches to\n> >write-through because an SSD needs to cache the write to avoid wearing\n> >out the chips early, see:\n> >\n> >\thttp://momjian.us/main/blogs/pgblog/2012.html#August_3_2012\n> >\n> \n> I not convinced about the need for BBU with SSD - you *can* use them\n> without one, just need to make sure about suitable longevity and\n> also the presence of (proven) power off protection (as discussed\n> previously). It is worth noting that using unproven or SSD known to\n> be lacking power off protection with a BBU will *not* save you from\n> massive corruption (or device failure) upon unexpected power loss.\n\nI don't think any drive that corrupts on power-off is suitable for a\ndatabase, but for non-db uses, sure, I guess they are OK, though you\nhave to be pretty money-constrainted to like that tradeoff.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Mar 2013 18:34:49 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On 15/03/13 11:34, Bruce Momjian wrote:\n>\n> I don't think any drive that corrupts on power-off is suitable for a\n> database, but for non-db uses, sure, I guess they are OK, though you\n> have to be pretty money-constrainted to like that tradeoff.\n>\n\nAgreed - really *all* SSD should have capacitor (or equivalent) power \noff protection...that fact that it's a feature present on only a handful \nof drives is...disappointing.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Mar 2013 11:53:37 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On 3/14/2013 3:37 PM, Mark Kirkwood wrote:\n> I not convinced about the need for BBU with SSD - you *can* use them \n> without one, just need to make sure about suitable longevity and also \n> the presence of (proven) power off protection (as discussed \n> previously). It is worth noting that using unproven or SSD known to be \n> lacking power off protection with a BBU will *not* save you from \n> massive corruption (or device failure) upon unexpected power loss.\n\nI think it probably depends on the specifics of the deployment, but for \nus the fact that the BBU isn't required in order to achieve high write \ntps with SSDs is one of the key benefits -- the power, cooling and space \nsavings over even a few servers are significant. In our case we only \nhave one or two drives per server so no need for fancy drive string \narrangements.\n>\n> Also, in terms of performance, the faster PCIe SSD do about as well by \n> themselves as connected to a RAID card with BBU. In fact they will do \n> better in some cases (the faster SSD can get close to the max IOPS \n> many RAID cards can handle...so more than a couple of 'em plugged into \n> one card will be throttled by its limitations).\n\nYou might want to evaluate the performance you can achieve with a \nsingle-SSD (use several for capacity by all means) before considering a \nRAID card + SSD solution.\nAgain I bet it depends on the application but our experience with the \nolder Intel 710 series is that their performance out-runs the CPU, at \nleast under our PG workload.\n\n\n\n\n\n\n\n\nOn 3/14/2013 3:37 PM, Mark Kirkwood\n wrote:\n\nI\n not convinced about the need for BBU with SSD - you *can* use them without one, just\n need to make sure about suitable longevity and also the presence\n of (proven) power off protection (as discussed previously). It is\n worth noting that using unproven or SSD known to be lacking power\n off protection with a BBU will *not*\n save you from massive corruption (or device failure) upon\n unexpected power loss.\n \n\n\n I think it probably depends on the specifics of the deployment, but\n for us the fact that the BBU isn't required in order to achieve high\n write tps with SSDs is one of the key benefits -- the power, cooling\n and space savings over even a few servers are significant. In our\n case we only have one or two drives per server so no need for fancy\n drive string arrangements.\n\n\n Also, in terms of performance, the faster PCIe SSD do about as\n well by themselves as connected to a RAID card with BBU. In fact\n they will do better in some cases (the faster SSD can get close to\n the max IOPS many RAID cards can handle...so more than a couple of\n 'em plugged into one card will be throttled by its limitations).\n \n\n\n You might want to evaluate the performance you can achieve with a\n single-SSD (use several for capacity by all means) before\n considering a RAID card + SSD solution.\n Again I bet it depends on the application but our experience with\n the older Intel 710 series is that their performance out-runs the\n CPU, at least under our PG workload.",
"msg_date": "Thu, 14 Mar 2013 17:37:54 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": ">> I not convinced about the need for BBU with SSD - you *can* use them \n>> without one, just need to make sure about suitable longevity and also \n>> the presence of (proven) power off protection (as discussed \n>> previously). It is worth noting that using unproven or SSD known to be \n>> lacking power off protection with a BBU will *not* save you from \n>> massive corruption (or device failure) upon unexpected power loss.\n\n>I don't think any drive that corrupts on power-off is suitable for a database, but for non-db uses, sure, I guess they are OK, though you have to be pretty money->constrainted to like that tradeoff.\n\nWouldn't mission critical databases normally be configured in a high availability cluster - presumably with replicas running on different power sources?\n\nIf you lose power to a member of the cluster (or even the master), you would have new data coming in and stuff to do long before it could come back online - corrupted disk or not.\n\nI find it hard to imagine configuring something that is too critical to be able to be restored from periodic backup to NOT be in a (synchronous) cluster. I'm not sure all the fuss over whether an SSD might come back after a hard server failure is really about. You should architect the solution so you can lose the server and throw it away and never bring it back online again. Native streaming replication is fairly straightforward to configure. Asynchronous multimaster (albeit with some synchronization latency) is also fairly easy to configure using third party tools such as SymmetricDS.\n\nAgreed that adding a supercap doesn't sound like a hard thing for a hardware manufacturer to do, but I don't think it should be a necessarily be showstopper for being able to take advantage of some awesome I/O performance opportunities.\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Mar 2013 18:06:02 +0000",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
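For reference, the streaming replication setup mentioned above looks roughly like this on the 9.1/9.2-era releases discussed in this thread; the addresses and the replication role are made up for illustration, so adjust to your own environment:

    # primary postgresql.conf
    wal_level = hot_standby
    max_wal_senders = 3
    wal_keep_segments = 128        # keep enough WAL for the standby to catch up

    # primary pg_hba.conf
    host  replication  repuser  192.168.1.2/32  md5

    # standby recovery.conf (after a pg_basebackup of the primary)
    standby_mode = 'on'
    primary_conninfo = 'host=192.168.1.1 port=5432 user=repuser'

    # standby postgresql.conf
    hot_standby = on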
{
"msg_contents": "On Fri, Mar 15, 2013 at 06:06:02PM +0000, Rick Otten wrote:\n> >I don't think any drive that corrupts on power-off is suitable for a\n> >database, but for non-db uses, sure, I guess they are OK, though you\n> >have to be pretty money->constrainted to like that tradeoff.\n>\n> Wouldn't mission critical databases normally be configured in a high\n> availability cluster - presumably with replicas running on different\n> power sources?\n>\n> If you lose power to a member of the cluster (or even the master), you\n> would have new data coming in and stuff to do long before it could\n> come back online - corrupted disk or not.\n>\n> I find it hard to imagine configuring something that is too critical\n> to be able to be restored from periodic backup to NOT be in a\n> (synchronous) cluster. I'm not sure all the fuss over whether an SSD\n> might come back after a hard server failure is really about. You\n> should architect the solution so you can lose the server and throw\n> it away and never bring it back online again. Native streaming\n> replication is fairly straightforward to configure. Asynchronous\n> multimaster (albeit with some synchronization latency) is also fairly\n> easy to configure using third party tools such as SymmetricDS.\n>\n> Agreed that adding a supercap doesn't sound like a hard thing for\n> a hardware manufacturer to do, but I don't think it should be a\n> necessarily be showstopper for being able to take advantage of some\n> awesome I/O performance opportunities.\n\nDo you want to recreate the server if it loses power over an extra $100\nper drive?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Mar 2013 14:55:08 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On Fri, Mar 15, 2013 at 12:06 PM, Rick Otten <[email protected]> wrote:\n>>> I not convinced about the need for BBU with SSD - you *can* use them\n>>> without one, just need to make sure about suitable longevity and also\n>>> the presence of (proven) power off protection (as discussed\n>>> previously). It is worth noting that using unproven or SSD known to be\n>>> lacking power off protection with a BBU will *not* save you from\n>>> massive corruption (or device failure) upon unexpected power loss.\n>\n>>I don't think any drive that corrupts on power-off is suitable for a database, but for non-db uses, sure, I guess they are OK, though you have to be pretty money->constrainted to like that tradeoff.\n>\n> Wouldn't mission critical databases normally be configured in a high availability cluster - presumably with replicas running on different power sources?\n\nI've worked in high end data centers where certain failures resulted\nin ALL power being lost. more than once. Relying on never losing\npower to keep your data from getting corrupted is not a good idea. Now\nif they're geographically separate you're maybe ok.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Mar 2013 13:14:17 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On 16/03/13 07:06, Rick Otten wrote:\n>>> I not convinced about the need for BBU with SSD - you *can* use them\n>>> without one, just need to make sure about suitable longevity and also\n>>> the presence of (proven) power off protection (as discussed\n>>> previously). It is worth noting that using unproven or SSD known to be\n>>> lacking power off protection with a BBU will *not* save you from\n>>> massive corruption (or device failure) upon unexpected power loss.\n>\n>> I don't think any drive that corrupts on power-off is suitable for a database, but for non-db uses, sure, I guess they are OK, though you have to be pretty money->constrainted to like that tradeoff.\n>\n> Wouldn't mission critical databases normally be configured in a high availability cluster - presumably with replicas running on different power sources?\n>\n> If you lose power to a member of the cluster (or even the master), you would have new data coming in and stuff to do long before it could come back online - corrupted disk or not.\n>\n> I find it hard to imagine configuring something that is too critical to be able to be restored from periodic backup to NOT be in a (synchronous) cluster. I'm not sure all the fuss over whether an SSD might come back after a hard server failure is really about. You should architect the solution so you can lose the server and throw it away and never bring it back online again. Native streaming replication is fairly straightforward to configure. Asynchronous multimaster (albeit with some synchronization latency) is also fairly easy to configure using third party tools such as SymmetricDS.\n>\n> Agreed that adding a supercap doesn't sound like a hard thing for a hardware manufacturer to do, but I don't think it should be a necessarily be showstopper for being able to take advantage of some awesome I/O performance opportunities.\n>\n>\n\nA somewhat extreme point of view. I note that the Mongodb guys added \njournaling for single server reliability a while ago - an admission that \nwhile in *theory* lots of semi-reliable nodes can be eventually \nconsistent, it is a lot less hassle if individual nodes are as reliable \nas possible. That is what this discussion is about.\n\nRegards\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 16 Mar 2013 21:47:34 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On Thu, Mar 14, 2013 at 4:37 PM, David Boreham <[email protected]> wrote:\n> You might want to evaluate the performance you can achieve with a single-SSD\n> (use several for capacity by all means) before considering a RAID card + SSD\n> solution.\n> Again I bet it depends on the application but our experience with the older\n> Intel 710 series is that their performance out-runs the CPU, at least under\n> our PG workload.\n\nHow many people are using a single enterprise grade SSD for production\nwithout RAID? I've had a few consumer grade SSDs brick themselves -\nbut are the enterprise grade SSDs, like the new Intel S3700 which you\ncan get in sizes up to 800GB, reliable enough to run as a single drive\nwithout RAID1? The performance of one is definitely good enough for\nmost medium sized workloads without the complexity of a BBU RAID and\nmultiple spinning disks...\n\n-Dave\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Mar 2013 17:44:36 -0700",
"msg_from": "David Rees <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On 3/20/2013 6:44 PM, David Rees wrote:\n> On Thu, Mar 14, 2013 at 4:37 PM, David Boreham <[email protected]> wrote:\n>> You might want to evaluate the performance you can achieve with a single-SSD\n>> (use several for capacity by all means) before considering a RAID card + SSD\n>> solution.\n>> Again I bet it depends on the application but our experience with the older\n>> Intel 710 series is that their performance out-runs the CPU, at least under\n>> our PG workload.\n> How many people are using a single enterprise grade SSD for production\n> without RAID? I've had a few consumer grade SSDs brick themselves -\n> but are the enterprise grade SSDs, like the new Intel S3700 which you\n> can get in sizes up to 800GB, reliable enough to run as a single drive\n> without RAID1? The performance of one is definitely good enough for\n> most medium sized workloads without the complexity of a BBU RAID and\n> multiple spinning disks...\n>\n\nYou're replying to my post, but I'll raise my hand again :)\n\nWe run a bunch of single-socket 1U, short-depth machines (Supermicro \nchassis) using 1x Intel 710 drives (we'd use S3700 in new deployments \ntoday). The most recent of these have 128G and E5-2620 hex-core CPU and \ndissipate less than 150W at full-load.\n\nCouldn't be happier with the setup. We have 18 months up time with no \ndrive failures, running at several hundred wps 7x24. We also write 10's \nof GB of log files every day that are rotated, so the drives are getting \nbeaten up on bulk data overwrites too.\n\nThere is of course a non-zero probability of some unpleasant firmware \nbug afflicting the drives (as with regular spinning drives), and \ninitially we deployed a \"spare\" 10k HD in the chassis, spun-down, that \nwould allow us to re-jigger the machines without SSD remotely (the data \ncenter is 1000 miles away). We never had to do that, and later \ndeployments omitted the HD spare. We've also considered mixing SSD from \ntwo vendors for firmware-bug-diversity, but so far we only have one \napproved vendor (Intel).\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Mar 2013 19:04:42 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On 3/20/2013 7:44 PM, David Rees wrote:\n> On Thu, Mar 14, 2013 at 4:37 PM, David Boreham <[email protected]> wrote:\n>> You might want to evaluate the performance you can achieve with a single-SSD\n>> (use several for capacity by all means) before considering a RAID card + SSD\n>> solution.\n>> Again I bet it depends on the application but our experience with the older\n>> Intel 710 series is that their performance out-runs the CPU, at least under\n>> our PG workload.\n> How many people are using a single enterprise grade SSD for production\n> without RAID? I've had a few consumer grade SSDs brick themselves -\n> but are the enterprise grade SSDs, like the new Intel S3700 which you\n> can get in sizes up to 800GB, reliable enough to run as a single drive\n> without RAID1? The performance of one is definitely good enough for\n> most medium sized workloads without the complexity of a BBU RAID and\n> multiple spinning disks...\n>\n> -Dave\nTwo is one, one is none.\n:-)\n\n-\n-- Karl Denninger\n/The Market Ticker ®/ <http://market-ticker.org>\nCuda Systems LLC\n\n\n\n\n\n\n\nOn 3/20/2013 7:44 PM, David Rees wrote:\n\n\nOn Thu, Mar 14, 2013 at 4:37 PM, David Boreham <[email protected]> wrote:\n\n\nYou might want to evaluate the performance you can achieve with a single-SSD\n(use several for capacity by all means) before considering a RAID card + SSD\nsolution.\nAgain I bet it depends on the application but our experience with the older\nIntel 710 series is that their performance out-runs the CPU, at least under\nour PG workload.\n\n\n\nHow many people are using a single enterprise grade SSD for production\nwithout RAID? I've had a few consumer grade SSDs brick themselves -\nbut are the enterprise grade SSDs, like the new Intel S3700 which you\ncan get in sizes up to 800GB, reliable enough to run as a single drive\nwithout RAID1? The performance of one is definitely good enough for\nmost medium sized workloads without the complexity of a BBU RAID and\nmultiple spinning disks...\n\n-Dave\n\n\n Two is one, one is none. \n :-)\n\n - \n-- Karl Denninger\nThe Market Ticker ®\n Cuda Systems LLC",
"msg_date": "Wed, 20 Mar 2013 20:46:07 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
{
"msg_contents": "On Wed, Mar 20, 2013 at 6:44 PM, David Rees <[email protected]> wrote:\n> On Thu, Mar 14, 2013 at 4:37 PM, David Boreham <[email protected]> wrote:\n>> You might want to evaluate the performance you can achieve with a single-SSD\n>> (use several for capacity by all means) before considering a RAID card + SSD\n>> solution.\n>> Again I bet it depends on the application but our experience with the older\n>> Intel 710 series is that their performance out-runs the CPU, at least under\n>> our PG workload.\n>\n> How many people are using a single enterprise grade SSD for production\n> without RAID? I've had a few consumer grade SSDs brick themselves -\n> but are the enterprise grade SSDs, like the new Intel S3700 which you\n> can get in sizes up to 800GB, reliable enough to run as a single drive\n> without RAID1? The performance of one is definitely good enough for\n> most medium sized workloads without the complexity of a BBU RAID and\n> multiple spinning disks...\n\nI would still at least run two in software RAID-1 for reliability.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Mar 2013 19:56:46 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
},
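For anyone who wants to try the two-drive software RAID-1 Scott suggests, a minimal sketch with Linux md follows. The device names (/dev/sdb, /dev/sdc) and the mount point are assumptions; adjust them for the actual hardware and distribution layout.

    # mirror the two SSDs, build a filesystem, and mount it for the cluster
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.ext4 /dev/md0
    mkdir -p /var/lib/pgsql/9.2/data
    mount -o noatime /dev/md0 /var/lib/pgsql/9.2/data
    # record the array so it assembles at boot
    mdadm --detail --scan >> /etc/mdadm.conf

After that, initdb (or restore a base backup) into the mount point as usual.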
{
"msg_contents": "On 21/03/13 13:44, David Rees wrote:\n> On Thu, Mar 14, 2013 at 4:37 PM, David Boreham <[email protected]> wrote:\n>> You might want to evaluate the performance you can achieve with a single-SSD\n>> (use several for capacity by all means) before considering a RAID card + SSD\n>> solution.\n>> Again I bet it depends on the application but our experience with the older\n>> Intel 710 series is that their performance out-runs the CPU, at least under\n>> our PG workload.\n>\n> How many people are using a single enterprise grade SSD for production\n> without RAID? I've had a few consumer grade SSDs brick themselves -\n> but are the enterprise grade SSDs, like the new Intel S3700 which you\n> can get in sizes up to 800GB, reliable enough to run as a single drive\n> without RAID1? The performance of one is definitely good enough for\n> most medium sized workloads without the complexity of a BBU RAID and\n> multiple spinning disks...\n>\n\nIf you are using Intel S3700 or 710's you can certainly use a pair setup \nin software RAID1 (so avoiding the need for RAID cards and BBU etc).\n\nI'd certainly feel happier with 2 drives :-) . However, a setup using \nreplication with a number of hosts - each with a single SSD is going to \nbe ok.\n\nRegards\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Mar 2013 15:26:07 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server setup"
}
] |
[
{
"msg_contents": "LSI MegaRAID SAS 9260-4i with four Intel SSDSC2CW240A3K5 SSDs OR four Hitachi Ultrastar 15K600 SAS drives?\n\nMy app is pretty write heavy and I have a lot of concurrent connections 300 - (though considering adding pgpool2 in front to increase throughput).\n\nRegards Niels Kristian\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 4 Mar 2013 14:52:01 +0100",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "What setup would you choose for postgresql 9.2 installation?"
},
{
"msg_contents": "Apologies for the tangential question, but how would pgpool2 \"increase \nthroughput\"? Wouldn't the same number of statements be issued by your \napplication? It would likely reduce the number of concurrent \nconnections, but that doesn't necessarily equate to \"increased throughput\".\n\n-AJ\n\n\nOn 3/4/2013 8:52 AM, Niels Kristian Schj�dt wrote:\n> LSI MegaRAID SAS 9260-4i with four Intel SSDSC2CW240A3K5 SSDs OR four Hitachi Ultrastar 15K600 SAS drives?\n>\n> My app is pretty write heavy and I have a lot of concurrent connections 300 - (though considering adding pgpool2 in front to increase throughput).\n>\n> Regards Niels Kristian\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 04 Mar 2013 09:04:48 -0500",
"msg_from": "AJ Weber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What setup would you choose for postgresql 9.2 installation?"
},
{
"msg_contents": "\n\nOn 04/03/13 13:52, Niels Kristian Schjødt wrote:\n> LSI MegaRAID SAS 9260-4i with four Intel SSDSC2CW240A3K5 SSDs OR four Hitachi Ultrastar 15K600 SAS drives?\n>\n> My app is pretty write heavy and I have a lot of concurrent connections 300 - (though considering adding pgpool2 in front to increase throughput).\n>\n\nIf you can afford it, there's no question in my mind that SSDs are the \nway to go. They can be 1000 times faster for random reads.\n\nMay I suggest that you do some experiments though - perhaps with just \none disk of each type - you can get some pretty good illustrative tests \nwith ordinary SATA drives in an ordinary laptop/desktop (but not a USB \nadapter). I did this originally when evaluating the (then new) Intel X25 \nSSD.\n\nThe other things to note are:\n\n* The filesystem matters. For the important thing, fdatasync(), ext2 is \n2x as fast as ext4, which itself is much faster than ext3. BUT ext2's \nfsck is horrid, so we chose ext4.\n\n* Will you enable the disk (or RAID controller) write cache?\n\n* Have you enough RAM for your key tables (and indexes) to fit in \nmemory? If not, 64GB of RAM is cheap these days.\n\n* In some applications, you can get a speed boost by turning \nsynchronous_commit off - this would mean that in a database crash, the \nlast few seconds are potentially lost, even through they application \nthinks they were committed. You may find this an acceptable tradeoff.\n\n* Postgres doesn't always write straight to the tables, but uses the WAL \n(write-ahead-log). So the benefit of SSD performance for \"random writes\" \nis less relevant than for \"random reads\".\n\n\nLastly, don't overdo the concurrent connections. You may end up with \nless thoughput than if you let postgres devote more resources to each \nrequest and let it finish faster.\n\n\nHope that helps,\n\nRichard\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 04 Mar 2013 14:18:00 +0000",
"msg_from": "Richard Neill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What setup would you choose for postgresql 9.2 installation?"
},
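To make the synchronous_commit point above concrete: it can be tried per session or per role rather than globally, so only traffic that can tolerate losing the last few seconds after a crash runs relaxed. This is just a sketch; the role name is made up.

    -- try it for one session while benchmarking
    SET synchronous_commit = off;

    -- or relax it only for a specific application role
    ALTER ROLE bulk_writer SET synchronous_commit = off;

For the fdatasync comparison between filesystems, the pg_test_fsync program shipped in contrib gives a quick per-filesystem number without running a full benchmark.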
{
"msg_contents": "On Mon, Mar 4, 2013 at 7:04 AM, AJ Weber <[email protected]> wrote:\n> Apologies for the tangential question, but how would pgpool2 \"increase\n> throughput\"? Wouldn't the same number of statements be issued by your\n> application? It would likely reduce the number of concurrent connections,\n> but that doesn't necessarily equate to \"increased throughput\".\n\nThis is a pretty common subject. Most servers have a \"peak\nthroughput\" that occurs at some fixed number of connections. for\ninstance a common throughput graph of pgbench on a server might look\nlike this:\n\nconns : tps\n1 : 200\n2 : 250\n4 : 400\n8 : 750\n12 : 1200\n16 : 2000\n24 : 2200\n28 : 2100\n32 : 2000\n40 : 1800\n64 : 1200\n80 : 800\n100 : 400\n\nSo by concentrating your connections to be ~24 you would get maximum\nthroughput. Such a graph is typical for most db servers, just a\ndifferent \"sweet spot\" where the max throughput for a given number of\nconnections. Some servers fall off fast past this number, some just\nslowly drop off.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 4 Mar 2013 07:36:53 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What setup would you choose for postgresql 9.2 installation?"
},
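A rough way to reproduce the table above on your own hardware is to sweep the client count against the same pgbench database; the numbers below (scale factor, 60-second runs, database name "bench") are arbitrary:

    pgbench -i -s 100 bench
    for c in 1 2 4 8 12 16 24 32 48 64; do
        echo "clients=$c"
        pgbench -c $c -T 60 -M prepared bench | grep '^tps'
    done

The client count where tps stops climbing is the natural target for a pooler's pool size.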
{
"msg_contents": "Great info, I really appreciate the insight. Is there a FAQ/recommended \nsetup for running pgbench to determine where this might be? (Is there a \nreason to setup pgbench differently based on the server's cores/memory/etc?)\n\nSorry if this detracts from the OP's original question.\n\n-AJ\n\nOn 3/4/2013 9:36 AM, Scott Marlowe wrote:\n> On Mon, Mar 4, 2013 at 7:04 AM, AJ Weber<[email protected]> wrote:\n>> Apologies for the tangential question, but how would pgpool2 \"increase\n>> throughput\"? Wouldn't the same number of statements be issued by your\n>> application? It would likely reduce the number of concurrent connections,\n>> but that doesn't necessarily equate to \"increased throughput\".\n> This is a pretty common subject. Most servers have a \"peak\n> throughput\" that occurs at some fixed number of connections. for\n> instance a common throughput graph of pgbench on a server might look\n> like this:\n>\n> conns : tps\n> 1 : 200\n> 2 : 250\n> 4 : 400\n> 8 : 750\n> 12 : 1200\n> 16 : 2000\n> 24 : 2200\n> 28 : 2100\n> 32 : 2000\n> 40 : 1800\n> 64 : 1200\n> 80 : 800\n> 100 : 400\n>\n> So by concentrating your connections to be ~24 you would get maximum\n> throughput. Such a graph is typical for most db servers, just a\n> different \"sweet spot\" where the max throughput for a given number of\n> connections. Some servers fall off fast past this number, some just\n> slowly drop off.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 04 Mar 2013 09:43:51 -0500",
"msg_from": "AJ Weber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What setup would you choose for postgresql 9.2 installation?"
},
{
"msg_contents": "On Mon, Mar 4, 2013 at 7:43 AM, AJ Weber <[email protected]> wrote:\n> Great info, I really appreciate the insight. Is there a FAQ/recommended\n> setup for running pgbench to determine where this might be? (Is there a\n> reason to setup pgbench differently based on the server's cores/memory/etc?)\n\nWell keep in mind that pgbench may or may not represent your real\nload. However it can be used with custom sql scripts to run a\ndifferent load than that which it runs by default so you can get some\nidea of where your peak connections / throughput sits. And let's face\nit that if you're currently running 500 connections and your peak\noccurs at 64 then a pooler is gonna make a difference whether you set\nit to 64 or 100 or 48 etc.\n\nThe basic starting point on pgbench is to use a scale factor of at\nleast 2x however many connections you'll be testing. You can also do\nread only transactions to get an idea of what the peak number of\nconnections are for read only versus read/write transactions. If read\nonly transactions peak at say 100 while r/w peak at 24, and your app\nis 95% read, then you're probably pretty safe setting a pooler to ~100\nconns instead of the lower 24. If your app is set to have one pool\nfor read only stuff (say reporting) and another for r/w then you can\nsetup two different poolers but that's a bit of added complexity you\nmay not really need.\n\nThe real danger with lots of connections comes from having lots and\nlots of idle connections. Let's say you've got 1000 connections and\n950 are idle. Then the server gets a load spike and queries start\npiling up. Suddenly instead of ~50 active connections that number\nstarts to climb to 100, 200, 300 etc. Given the slower throughput most\nservers see as the number of active connections climbs your server may\nslow to a crawl and never recover til you remove load.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 4 Mar 2013 08:23:44 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What setup would you choose for postgresql 9.2 installation?"
},
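If the goal is simply to keep ~300 client connections from turning into ~300 active backends, a transaction-mode pooler in front does the capping. Below is a minimal pgbouncer sketch (pgbouncer rather than pgpool-II, purely as an illustration; the database name, paths and pool size are assumptions to be replaced with the measured sweet spot):

    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    default_pool_size = 24
    max_client_conn = 400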
{
"msg_contents": "That's around the behavior I'm seeing - I'll be testing tonight! :-)\n\n\nDen 04/03/2013 kl. 16.23 skrev Scott Marlowe <[email protected]>:\n\n> however\n\n\nThat's around the behavior I'm seeing - I'll be testing tonight! :-)Den 04/03/2013 kl. 16.23 skrev Scott Marlowe <[email protected]>:however",
"msg_date": "Mon, 4 Mar 2013 18:25:10 +0100",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What setup would you choose for postgresql 9.2 installation?"
}
] |
[
{
"msg_contents": "Good Afternoon,\n\nWe are having a performance issue with our views in PostgreSQL and based\non the requirements for assistance you recommend providing the full\ntable and index schema besides additional information from this site.\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nWe have been unsuccessful in finding a tool to obfuscate the schema\nbefore providing it for review, is there an open source tool you\nrecommend to use?\n\nThanks!\n\n\n\n\n\n\n\n\nGood Afternoon, \nWe are having a performance issue with our\n views in\n PostgreSQL and based on the requirements for assistance you\n recommend providing\n the full table and index schema besides additional information\n from this site.\n https://wiki.postgresql.org/wiki/Slow_Query_Questions\nWe have been unsuccessful in finding a tool to\n obfuscate the\n schema before providing it for review, is there an open source\n tool you\n recommend to use?\nThanks!",
"msg_date": "Mon, 04 Mar 2013 15:31:42 -0600",
"msg_from": "Joseph Pravato <[email protected]>",
"msg_from_op": true,
"msg_subject": "Schema obfuscator for performance question"
},
{
"msg_contents": "2013/3/4 Joseph Pravato <[email protected]>\n\n> We are having a performance issue with our views in PostgreSQL and based\n> on the requirements for assistance you recommend providing the full table\n> and index schema besides additional information from this site.\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> **\n>\n> We have been unsuccessful in finding a tool to obfuscate the schema before\n> providing it for review, is there an open source tool you recommend to use?\n>\n\nYou can anonymize EXPLAIN output here: http://explain.depesz.com/\n\nNot sure bout the schema...\n\n\n-- \nVictor Y. Yegorov\n\n2013/3/4 Joseph Pravato <[email protected]>\n\nWe are having a performance issue with our\n views in\n PostgreSQL and based on the requirements for assistance you\n recommend providing\n the full table and index schema besides additional information\n from this site.\n https://wiki.postgresql.org/wiki/Slow_Query_Questions\nWe have been unsuccessful in finding a tool to\n obfuscate the\n schema before providing it for review, is there an open source\n tool you\n recommend to use?You can anonymize EXPLAIN output here: http://explain.depesz.com/Not sure bout the schema... \n-- Victor Y. Yegorov",
"msg_date": "Mon, 4 Mar 2013 23:37:22 +0200",
"msg_from": "Victor Yegorov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Schema obfuscator for performance question"
},
{
"msg_contents": "On Mon, Mar 4, 2013 at 6:31 PM, Joseph Pravato\n<[email protected]> wrote:\n> Good Afternoon,\n>\n> We are having a performance issue with our views in PostgreSQL and based on\n> the requirements for assistance you recommend providing the full table and\n> index schema besides additional information from this site.\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> We have been unsuccessful in finding a tool to obfuscate the schema before\n> providing it for review, is there an open source tool you recommend to use?\n>\n> Thanks!\n\nWithout getting into the ugly business of asking why you'd want to\nobfuscate schema, I can imagine a few (wrong) answers, I'll start by\nnoting that schema transmits meaning in the very names used, so\nobfuscation, especially automatic and mindless obfuscation, will make\nthe problem a lot harder to review, because it will be a lot harder to\nunderstand the schema.\n\nSo... I'd suggest, if you need obfuscation, that it should be done\nmanually, so that you may preserve meaningful names.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 4 Mar 2013 18:37:23 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Schema obfuscator for performance question"
}
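One low-tech way to follow the manual-obfuscation advice is to pull the object names out of the catalogs first and build a rename map from that, rather than renaming blindly. A sketch:

    -- list every column in the public schema as the basis for a rename map
    SELECT table_name, column_name, data_type
    FROM information_schema.columns
    WHERE table_schema = 'public'
    ORDER BY table_name, ordinal_position;

pg_dump --schema-only then gives the raw DDL to edit by hand using that map.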
] |
[
{
"msg_contents": "Hi,\n\nI'm running a rails app, where I have a model called Car that has_many Images. Now when I tell rails to include those images, when querying say 50 cars, then it often decides to use a SELECT * from images WHERE car_id IN (id1,id2,id3,id4…) instead of doing a join. \n\nNow either way it uses the index I have on car_id:\n\nIndex Scan using car_id_ix on adverts (cost=0.47..5665.34 rows=1224 width=234)\n\tIndex Cond: (car_id = ANY ('{7097561,7253541,5159633,6674471,...}'::integer[]))\n\nBut it's slow, it's very slow. In this case it took 3,323ms\n\nCan I do anything to optimize that query or maybe the index or something?\n\nThe table has 16.000.000 rows\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 5 Mar 2013 15:00:48 +0100",
"msg_from": "=?windows-1252?Q?Niels_Kristian_Schj=F8dt?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimize SELECT * from table WHERE foreign_key_id IN (key1, key2,\n key3,\n\tkey4...)"
},
{
"msg_contents": "On 03/05/2013 15:00, Niels Kristian Schj�dt wrote:\n> Hi,\n>\n> I'm running a rails app, where I have a model called Car that has_many Images. Now when I tell rails to include those images, when querying say 50 cars, then it often decides to use a SELECT * from images WHERE car_id IN (id1,id2,id3,id4�) instead of doing a join.\n\nwhy do you want a join here ? if you don't need any \"cars\" data there is \nno need to JOIN that table.\nNow a select ... from ... where id in (id1, id2, ..., idn) isn't very \nscalable.\n\nInstead of passing id1, id2, ..., idn you'be better pass the condition \nand do a where id in (select ... ), or where exists (select 1 ... where \n...), or a join, or ...\n\n> Now either way it uses the index I have on car_id:\n>\n> Index Scan using car_id_ix on adverts (cost=0.47..5665.34 rows=1224 width=234)\n> \tIndex Cond: (car_id = ANY ('{7097561,7253541,5159633,6674471,...}'::integer[]))\n>\n> But it's slow, it's very slow. In this case it took 3,323ms\n\n3ms isn't slow\n\n> Can I do anything to optimize that query or maybe the index or something?\n\nyour index is already used\n\n> The table has 16.000.000 rows\n>\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 05 Mar 2013 15:26:44 +0100",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize SELECT * from table WHERE foreign_key_id IN\n\t(key1,key2,key3,key4...)"
},
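To make the suggestion above concrete: rather than having Rails interpolate the 50 ids, push whatever condition selected those cars into the images query itself, so the planner sees one statement. The table and column names here are guesses based on the description:

    -- subselect form
    SELECT * FROM images
    WHERE car_id IN (SELECT id FROM cars WHERE dealer_id = 42);

    -- or the equivalent join
    SELECT i.*
    FROM images i
    JOIN cars c ON c.id = i.car_id
    WHERE c.dealer_id = 42;

Whether this helps depends on how selective the condition is; with 16 million rows in images it at least gives the planner options other than a 50-element ANY() probe.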
{
"msg_contents": "Hi, thanks for answering. See comments inline.\n\nDen 05/03/2013 kl. 15.26 skrev Julien Cigar <[email protected]>:\n\n> On 03/05/2013 15:00, Niels Kristian Schjødt wrote:\n>> Hi,\n>> \n>> I'm running a rails app, where I have a model called Car that has_many Images. Now when I tell rails to include those images, when querying say 50 cars, then it often decides to use a SELECT * from images WHERE car_id IN (id1,id2,id3,id4…) instead of doing a join.\n> \n> why do you want a join here ? if you don't need any \"cars\" data there is no need to JOIN that table.\nI need both\n> Now a select ... from ... where id in (id1, id2, ..., idn) isn't very scalable.\n> \n> Instead of passing id1, id2, ..., idn you'be better pass the condition and do a where id in (select ... ), or where exists (select 1 ... where ...), or a join, or …\n> \nI tried this now, and it doesn't seem to do a very big difference unfortunately…\n\n>> Now either way it uses the index I\n>> have on car_id:\n>> \n>> Index Scan using car_id_ix on adverts (cost=0.47..5665.34 rows=1224 width=234)\n>> \tIndex Cond: (car_id = ANY ('{7097561,7253541,5159633,6674471,...}'::integer[]))\n>> \n>> But it's slow, it's very slow. In this case it took 3,323ms\n> \n> 3ms isn't slow\n> \nSorry, it's 3323ms!\n\n>> Can I do anything to optimize that query or maybe the index or something?\n> \n> your index is already used\n\nOkay this leaves me with - \"get better hardware\" or?\n\n> \n>> The table has 16.000.000 rows\n>> \n> \n> \n> -- \n> No trees were killed in the creation of this message.\n> However, many electrons were terribly inconvenienced.\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 6 Mar 2013 00:51:42 +0100",
"msg_from": "=?windows-1252?Q?Niels_Kristian_Schj=F8dt?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimize SELECT * from table WHERE foreign_key_id IN (key1, key2,\n\tkey3, key4...)"
},
{
"msg_contents": "\nOn 03/05/2013 03:51 PM, Niels Kristian Schjødt wrote:\n\n>> 3ms isn't slow\n>>\n> Sorry, it's 3323ms!\n>\n>>> Can I do anything to optimize that query or maybe the index or something?\n>>\n>> your index is already used\n>\n> Okay this leaves me with - \"get better hardware\" or?\n\nWhat does explain analyze say versus just explain.\n\nJD\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, Postgres-XC\n@cmdpromptinc - 509-416-6579\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 05 Mar 2013 16:07:21 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize SELECT * from table WHERE foreign_key_id IN\n\t(key1,key2,key3,key4...)"
},
{
"msg_contents": "On 03/06/2013 00:51, Niels Kristian Schj�dt wrote:\n> Hi, thanks for answering. See comments inline.\n>\n> Den 05/03/2013 kl. 15.26 skrev Julien Cigar <[email protected]>:\n>\n>> On 03/05/2013 15:00, Niels Kristian Schj�dt wrote:\n>>> Hi,\n>>>\n>>> I'm running a rails app, where I have a model called Car that has_many Images. Now when I tell rails to include those images, when querying say 50 cars, then it often decides to use a SELECT * from images WHERE car_id IN (id1,id2,id3,id4�) instead of doing a join.\n>> why do you want a join here ? if you don't need any \"cars\" data there is no need to JOIN that table.\n> I need both\n>> Now a select ... from ... where id in (id1, id2, ..., idn) isn't very scalable.\n>>\n>> Instead of passing id1, id2, ..., idn you'be better pass the condition and do a where id in (select ... ), or where exists (select 1 ... where ...), or a join, or �\n>>\n> I tried this now, and it doesn't seem to do a very big difference unfortunately�\n\ncould you paste the full query, an explain analyze of it, and some \ndetails about your config (how much ram ? what's your: shared_buffers, \neffective_cache_size, cpu_tuple_cost, work_mem, ...) ?\n\n>>> Now either way it uses the index I\n>>> have on car_id:\n>>>\n>>> Index Scan using car_id_ix on adverts (cost=0.47..5665.34 rows=1224 width=234)\n>>> \tIndex Cond: (car_id = ANY ('{7097561,7253541,5159633,6674471,...}'::integer[]))\n>>>\n>>> But it's slow, it's very slow. In this case it took 3,323ms\n>> 3ms isn't slow\n>>\n> Sorry, it's 3323ms!\n>\n>>> Can I do anything to optimize that query or maybe the index or something?\n>> your index is already used\n> Okay this leaves me with - \"get better hardware\" or?\n>\n>>> The table has 16.000.000 rows\n>>>\n>>\n>> -- \n>> No trees were killed in the creation of this message.\n>> However, many electrons were terribly inconvenienced.\n>>\n>>\n>>\n>> -- \n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 06 Mar 2013 13:33:05 +0100",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize SELECT * from table WHERE foreign_key_id IN\n\t(key1,key2,key3,key4...)"
},
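For reference, the settings asked for above can be collected in one query instead of hunting through postgresql.conf:

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'effective_cache_size',
                   'work_mem', 'cpu_tuple_cost', 'random_page_cost');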
{
"msg_contents": "On Tue, Mar 5, 2013 at 4:07 PM, Joshua D. Drake <[email protected]>wrote:\n\n>\n> On 03/05/2013 03:51 PM, Niels Kristian Schjødt wrote:\n>\n> 3ms isn't slow\n>>>\n>>> Sorry, it's 3323ms!\n>>\n>> Can I do anything to optimize that query or maybe the index or something?\n>>>>\n>>>\n>>> your index is already used\n>>>\n>>\n>> Okay this leaves me with - \"get better hardware\" or?\n>>\n>\n> What does explain analyze say versus just explain.\n>\n\n\nBetter yet, \"explain (analyze, buffers)\" with track_io_timing turned on.\n\nCheers,\n\nJeff\n\nOn Tue, Mar 5, 2013 at 4:07 PM, Joshua D. Drake <[email protected]> wrote:\n\nOn 03/05/2013 03:51 PM, Niels Kristian Schjødt wrote:\n\n\n3ms isn't slow\n\n\nSorry, it's 3323ms!\n\n\nCan I do anything to optimize that query or maybe the index or something?\n\n\nyour index is already used\n\n\nOkay this leaves me with - \"get better hardware\" or?\n\n\nWhat does explain analyze say versus just explain.Better yet, \"explain (analyze, buffers)\" with track_io_timing turned on.\nCheers,Jeff",
"msg_date": "Wed, 6 Mar 2013 08:44:00 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimize SELECT * from table WHERE foreign_key_id IN (key1, key2,\n\tkey3, key4...)"
}
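For reference, this is the kind of invocation Jeff means (track_io_timing appeared in 9.2 and normally needs superuser to change; the id list is just the one from the original post):

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM images
    WHERE car_id IN (7097561, 7253541, 5159633, 6674471);

With that, each plan node reports buffer hit/read counts and the time spent on I/O, which usually settles whether the 3.3 seconds goes to disk reads or somewhere else.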
] |
[
{
"msg_contents": "I was hoping to just get a \"gut reaction\" on some pgbench numbers I have, to\nsee if I'm in the ballpark.\n\nOS: ScientificLinux 6.3, x86_64\nHardware: 4x real disks (not SSD) behind an LSI 9260 in raid10, Xeon E5-2680\nwith hyperthreading OFF, 128GB of RAM.\nSetup: postgresql 8.4.13, ext4, barriers ON, disk write cache *off*, write-\nback enabled on the LSI.\nI initialized with sizes of 100, 200, and 400.\n\nI've done some tuning of the postgresql config, but mostly I'm just trying to\nfind out if I'm in the right ballpark.\n\nI ran pgbench from another (similar) host:\n\npgbench -h BLAH -c 32 -M prepared -t 100000 -S\nI get 95,000 to 100,000 tps.\n\npgbench -h BLAH -c 32 -M prepared -t 100000\nseems to hover around 6,200 tps (size 100) to 13,700 (size 400)\n\nDo these basically sniff right?\n(NOTE: with barriers off, I get a slight increase - 10% - in the\nread-write test, and a larger *decrease* - 15% - with the read-only\ntest @ 400. No change @ 100)\n\n-- \nJon\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 5 Mar 2013 13:35:51 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "sniff test on some PG 8.4 numbers"
},
{
"msg_contents": "\n> Do these basically sniff right?\n\nWell, the read test seems reasonable. I'm impressed by the speed of the\nwrite test ... how large is the raid card cache?\n\nAnd why 8.4? Can you try 9.2?\n\n> (NOTE: with barriers off, I get a slight increase - 10% - in the\n> read-write test, and a larger *decrease* - 15% - with the read-only\n> test @ 400. No change @ 100)\n\nOh, interesting. Can you reproduce that? I wonder what would cause\nread-only to drop without barriers ...\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 05 Mar 2013 17:02:07 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sniff test on some PG 8.4 numbers"
},
{
"msg_contents": "On Tue, Mar 5, 2013 at 7:02 PM, Josh Berkus <[email protected]> wrote:\n>\n>> Do these basically sniff right?\n>\n> Well, the read test seems reasonable. I'm impressed by the speed of the\n> write test ... how large is the raid card cache?\n>\n> And why 8.4? Can you try 9.2?\n\n8.4 because it's what I've got, basically. I might be able to try 9.2\nlater, but I'm targeting 8.4 right now.\n512MB of memory on the card.\n\n>> (NOTE: with barriers off, I get a slight increase - 10% - in the\n>> read-write test, and a larger *decrease* - 15% - with the read-only\n>> test @ 400. No change @ 100)\n>\n> Oh, interesting. Can you reproduce that? I wonder what would cause\n> read-only to drop without barriers ...\n\nI'll try to test again soon.\nI know that if I use writethrough instead of writeback mode the\nperformance nosedives.\nDoes anybody have suggestions for stripe size? (remember: *4* disks)\n\n-- \nJon\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 5 Mar 2013 20:13:10 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sniff test on some PG 8.4 numbers"
},
{
"msg_contents": "On Tue, Mar 5, 2013 at 1:35 PM, Jon Nelson <[email protected]> wrote:\n>\n> pgbench -h BLAH -c 32 -M prepared -t 100000 -S\n> I get 95,000 to 100,000 tps.\n>\n> pgbench -h BLAH -c 32 -M prepared -t 100000\n> seems to hover around 6,200 tps (size 100) to 13,700 (size 400)\n\nSome followup:\nThe read test goes (up to) 133K tps, and the read-write test to 22k\ntps when performed over localhost.\n\n-- \nJon\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 5 Mar 2013 21:00:30 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sniff test on some PG 8.4 numbers"
},
{
"msg_contents": "On 3/5/13 10:00 PM, Jon Nelson wrote:\n> On Tue, Mar 5, 2013 at 1:35 PM, Jon Nelson <[email protected]> wrote:\n>>\n>> pgbench -h BLAH -c 32 -M prepared -t 100000 -S\n>> I get 95,000 to 100,000 tps.\n>>\n>> pgbench -h BLAH -c 32 -M prepared -t 100000\n>> seems to hover around 6,200 tps (size 100) to 13,700 (size 400)\n>\n> Some followup:\n> The read test goes (up to) 133K tps, and the read-write test to 22k\n> tps when performed over localhost.\n\nAll your write numbers are inflated because the test is too short. This \nhardware will be lucky to sustain 7500 TPS on writes. But you're only \nwriting 100,000 transactions, which means the entire test run isn't even \nhitting the database--only the WAL writes are. When your test run is \nfinished, look at /proc/meminfo I'd wager a large sum you'll find \n\"Dirty:\" has hundreds of megabytes, if not gigabytes, of unwritten \ninformation. Basically, 100,000 writes on this sort of server can all \nbe cached in Linux's write cache, and pgbench won't force them out of \nthere. So you're not simulating sustained database writes, only how \nfast of a burst the server can handle for a little bit.\n\nFor a write test, you must run for long enough to start and complete a \ncheckpoint before the numbers are of any use, and 2 checkpoints are even \nbetter. The minimum useful length is a 10 minute run, so \"-T 600\" \ninstead of using -t. If you want something that does every trick \npossible to make it hard to cheat at this, as well as letting you graph \nsize and client data, try my pgbench-tools: \nhttps://github.com/gregs1104/pgbench-tools (Note that there is a bug in \nthat program right now, it spawns vmstat and iostat processes but they \ndon't get killed at the end correctly. \"killall vmstat iostat\" after \nrunning is a good idea until I fix that).\n\nYour read test numbers are similarly inflated, but read test errors \naren't as large. Around 133K TPS on select-only is probably accurate. \nFor a read test, use \"-T 30\" to let it run for 30 seconds to get a more \naccurate number. The read read bottleneck on your hardware is going to \nbe the pgbench client itself, which on 8.4 is running as a single \nthread. On 9.0+ you can have multiple pgbench workers. It normally \ntakes 4 to 8 of them to saturate a larger server.\n\nI hope you're not considering deploying a new application with 8.4. \nTake a look at http://www.postgresql.org/support/versioning/ and you'll \nsee 8.4 only has a little over a year before it won't get bug fixes \nanymore. Also, your server would really appreciate the performance \ngains added to 9.2. If that's a bit too leading edge for you, I don't \nrecommend deploying at version below 9.1 anymore.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 10 Mar 2013 11:46:24 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sniff test on some PG 8.4 numbers"
},
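A sketch of the kind of run Greg describes, with the kernel's dirty-page backlog visible at the same time (durations are arbitrary; -j needs a 9.0+ pgbench, and -l writes per-transaction latency logs that can be graphed afterwards):

    # in one terminal: watch how much dirty data the page cache is holding
    watch -n 5 'grep -E "^(Dirty|Writeback):" /proc/meminfo'

    # in another: a 10-minute write test that spans at least one checkpoint
    pgbench -c 32 -j 8 -M prepared -T 600 -l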
{
"msg_contents": "On Sun, Mar 10, 2013 at 10:46 AM, Greg Smith <[email protected]> wrote:\n> On 3/5/13 10:00 PM, Jon Nelson wrote:\n>>\n>> On Tue, Mar 5, 2013 at 1:35 PM, Jon Nelson <[email protected]>\n>> wrote:\n>>>\n>>>\n>>> pgbench -h BLAH -c 32 -M prepared -t 100000 -S\n>>> I get 95,000 to 100,000 tps.\n>>>\n>>> pgbench -h BLAH -c 32 -M prepared -t 100000\n>>> seems to hover around 6,200 tps (size 100) to 13,700 (size 400)\n>>\n>>\n>> Some followup:\n>> The read test goes (up to) 133K tps, and the read-write test to 22k\n>> tps when performed over localhost.\n>\n>\n> All your write numbers are inflated because the test is too short. This\n> hardware will be lucky to sustain 7500 TPS on writes. But you're only\n> writing 100,000 transactions, which means the entire test run isn't even\n> hitting the database--only the WAL writes are. When your test run is\n> finished, look at /proc/meminfo I'd wager a large sum you'll find \"Dirty:\"\n> has hundreds of megabytes, if not gigabytes, of unwritten information.\n> Basically, 100,000 writes on this sort of server can all be cached in\n> Linux's write cache, and pgbench won't force them out of there. So you're\n> not simulating sustained database writes, only how fast of a burst the\n> server can handle for a little bit.\n>\n> For a write test, you must run for long enough to start and complete a\n> checkpoint before the numbers are of any use, and 2 checkpoints are even\n> better. The minimum useful length is a 10 minute run, so \"-T 600\" instead\n> of using -t. If you want something that does every trick possible to make\n> it hard to cheat at this, as well as letting you graph size and client data,\n> try my pgbench-tools: https://github.com/gregs1104/pgbench-tools (Note that\n> there is a bug in that program right now, it spawns vmstat and iostat\n> processes but they don't get killed at the end correctly. \"killall vmstat\n> iostat\" after running is a good idea until I fix that).\n\nI (briefly!) acquired an identical machine as last but this time with\nan Areca instead of an LSI (4 drives).\n\nThe following is with ext4, nobarrier, and noatime. As noted in the\noriginal post, I have done a fair bit of system tuning. I have the\ndirty_bytes and dirty_background_bytes set to 3GB and 2GB,\nrespectively.\n\nI built 9.2 and using 9.2 and the following pgbench invocation:\n\npgbench -j 8 -c 32 -M prepared -T 600\n\ntransaction type: TPC-B (sort of)\nscaling factor: 400\nquery mode: prepared\nnumber of clients: 32\nnumber of threads: 8\nduration: 600 s\nnumber of transactions actually processed: 16306693\ntps = 27176.566608 (including connections establishing)\ntps = 27178.518841 (excluding connections establishing)\n\n> Your read test numbers are similarly inflated, but read test errors aren't\n> as large. Around 133K TPS on select-only is probably accurate. For a read\n> test, use \"-T 30\" to let it run for 30 seconds to get a more accurate\n> number. The read read bottleneck on your hardware is going to be the\n> pgbench client itself, which on 8.4 is running as a single thread. On 9.0+\n> you can have multiple pgbench workers. 
It normally takes 4 to 8 of them to\n> saturate a larger server.\n\nThe 'select-only' test (same as above with '-S'):\n\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 400\nquery mode: prepared\nnumber of clients: 32\nnumber of threads: 8\nduration: 600 s\nnumber of transactions actually processed: 127513307\ntps = 212514.337971 (including connections establishing)\ntps = 212544.392278 (excluding connections establishing)\n\nThese are the *only* changes I've made to the config file:\n\nshared_buffers = 32GB\nwal_buffers = 16MB\ncheckpoint_segments = 1024\n\nI can run either or both of these again with different options, but\nmostly I'm looking for a sniff test.\nHowever, I'm a bit confused, now.\n\nIt seems as though you say the write numbers are not believable,\nsuggesting a value of 7,500 (roughly 1/4 what I'm getting). If I run\nthe read test for 30 seconds I get - highly variable - between 300K\nand 400K tps. Why are these tps so high compared to your expectations?\nNote: I did get better results with HT on vs. with HT off, so I've\nleft HT on for now.\n\n\n\n-- \nJon\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 10 Mar 2013 20:18:09 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sniff test on some PG 8.4 numbers"
},
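For anyone reproducing this, the dirty-cache limits mentioned above are ordinary sysctls; the 3GB/2GB values below are simply the numbers quoted in the thread, not a recommendation:

    sysctl -w vm.dirty_bytes=3221225472
    sysctl -w vm.dirty_background_bytes=2147483648
    # persist across reboots
    echo 'vm.dirty_bytes = 3221225472' >> /etc/sysctl.conf
    echo 'vm.dirty_background_bytes = 2147483648' >> /etc/sysctl.conf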
{
"msg_contents": "On Sun, Mar 10, 2013 at 7:18 PM, Jon Nelson <[email protected]> wrote:\n> It seems as though you say the write numbers are not believable,\n> suggesting a value of 7,500 (roughly 1/4 what I'm getting). If I run\n> the read test for 30 seconds I get - highly variable - between 300K\n> and 400K tps. Why are these tps so high compared to your expectations?\n> Note: I did get better results with HT on vs. with HT off, so I've\n> left HT on for now.\n\ngo back and re-read greg's post. He explains why he thinks you'll\nsustain less. Basically it's caching effects because no pg_xlog / wal\nlog writing happening. Once you get a feel for how fast it is, run\nthe test for 30 minutes to several hours to see how it goes. Then when\nyou have a weekend just leave it running a couple days. Still pretty\ngood numbers so far.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 10 Mar 2013 21:02:00 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sniff test on some PG 8.4 numbers"
},
{
"msg_contents": "On 3/10/13 9:18 PM, Jon Nelson wrote:\n\n> The following is with ext4, nobarrier, and noatime. As noted in the\n> original post, I have done a fair bit of system tuning. I have the\n> dirty_bytes and dirty_background_bytes set to 3GB and 2GB,\n> respectively.\n\nThat's good, but be aware those values are still essentially unlimited \nwrite caches. A server with 4 good but regular hard drives might do as \nlittle as 10MB/s of random writes on a real workload. If 2GB of data \nends up dirty, the flushing that happens at the end of a database \ncheckpoint will need to clear all of that out of RAM. When that \nhappens, you're looking at a 3 minute long cache flush to push out 2GB. \n It's not unusual for pgbench tests to pause for over a minute straight \nwhen that happens. With your setup, where checkpoints happen every 5 \nminutes, this is only happening once per test run. The disruption isn't \neasily visible if you look at the average rate; it's outweighed by the \nperiods where writes happen very fast because the cache isn't full yet. \n You have to get pgbench to plot latency over time to see them and then \nanalyze that data. This problem is the main reason I put together the \npgbench-tools set for running things, because once you get to processing \nthe latency files and make graphs from them it starts to be a pain to \nlook at the results.\n\n> I built 9.2 and using 9.2 and the following pgbench invocation:\n>\n> pgbench -j 8 -c 32 -M prepared -T 600\n>\n> transaction type: TPC-B (sort of)\n> scaling factor: 400\n\nI misread this completely in your message before; I thought you wrote \n4000. A scaling factor of 400 is making a database that's 6GB in size. \n Your test is basically seeing how fast the system memory and the RAID \ncache can move things around. In that situation, your read and write \nnumbers are reasonable. They aren't actually telling you anything \nuseful about the disks though, because they're barely involved here. \nYou've sniffed the CPU, memory, and RAID controller and they smell fine. \n You'll need at least an order of magnitude increase in scale to get a \nwhiff of the disks.\n\npgbench scale numbers give approximately 16MB per scale factor. You \ndon't actually stress the drives until that total number is at least 2X \nas big as RAM. We had to raise the limit on the pgbench scales recently \nbecause it only goes up to ~20,000 on earlier versions, and that's not a \nbig enough scale to test many servers now.\n\nOn the select-only tests, much of the increase from ~100K to ~200K is \nprobably going from 8.4 to 9.2. There's two major and several minor \ntuning changes that make it much more efficient at that specific task.\n\n> These are the *only* changes I've made to the config file:\n>\n> shared_buffers = 32GB\n> wal_buffers = 16MB\n> checkpoint_segments = 1024\n\nNote that these are the only changes that actually impact pgbench \nresults. The test doesn't stress very many parts of the system, such as \nthe query optimizer.\n\nAlso be aware these values may not be practical to use in production. \nYou can expect bad latency issues due to having shared_buffers so large. \n All that memory has to be reconciled and written to disk if it's been \nmodified at each checkpoint, and 32GB of such work is a lot. I have \nsystems where we can't make shared_buffers any bigger than 4GB before \ncheckpoint pauses get too bad.\n\nSimilarly, setting checkpoint_segments to 1024 means that you might go \nthrough 16GB of writes before a checkpoint happens. 
That's great for \naverage performance...but when that checkpoint does hit, you're facing a \nlarge random I/O backlog.\n\nThere's not much you can do about all this on the Linux side. If you \ndrop the dirty_* parameters too much, maintenance operations like VACUUM \nstart to get slow. Really all you can do is avoid setting \nshared_buffers and checkpoint_segments too high, so the checkpoint \nbacklog never gets gigantic. The tuning you've done is using higher \nvalues than we normally recommend because it's not quite practical to \ndeploy like that. That and the very small database are probably why \nyour numbers are so high.\n\n> Note: I did get better results with HT on vs. with HT off, so I've\n> left HT on for now.\n\npgbench select-only in particular does like hyper-threading. We get \noccasional reports of more memory-bound workloads actually slowing when \nit's turned on. I think it's a wash and leave it on. Purchasing and \nmanagement people tend to get annoyed if they discover the core count of \nthe server is half what they thought they were buying. The potential \ndownside of HT isn't so big that its worth opening that can of worms, \nunless you've run real application level tests to prove it hurts.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 11 Mar 2013 00:28:18 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sniff test on some PG 8.4 numbers"
},
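To actually see the checkpoint stalls Greg describes, rather than only the averaged tps, it helps to log checkpoints and spread their writes; a hedged postgresql.conf sketch (values are illustrative, not tuning advice):

    log_checkpoints = on                  # one log line per checkpoint, with write/sync times
    checkpoint_completion_target = 0.9    # spread checkpoint writes across more of the interval
    checkpoint_timeout = 5min             # the default; with checkpoint_segments = 1024 this is what usually fires

Pair that with pgbench's -l latency logs and any pauses show up as gaps in the per-transaction timestamps.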
{
"msg_contents": "On Sun, Mar 10, 2013 at 11:28 PM, Greg Smith <[email protected]> wrote:\n> On 3/10/13 9:18 PM, Jon Nelson wrote:\n>\n>> The following is with ext4, nobarrier, and noatime. As noted in the\n>> original post, I have done a fair bit of system tuning. I have the\n>> dirty_bytes and dirty_background_bytes set to 3GB and 2GB,\n>> respectively.\n>\n>\n> That's good, but be aware those values are still essentially unlimited write\n> caches. A server with 4 good but regular hard drives might do as little as\n> 10MB/s of random writes on a real workload. If 2GB of data ends up dirty,\n> the flushing that happens at the end of a database checkpoint will need to\n> clear all of that out of RAM. When that happens, you're looking at a 3\n> minute long cache flush to push out 2GB. It's not unusual for pgbench tests\n> to pause for over a minute straight when that happens. With your setup,\n> where checkpoints happen every 5 minutes, this is only happening once per\n> test run. The disruption isn't easily visible if you look at the average\n> rate; it's outweighed by the periods where writes happen very fast because\n> the cache isn't full yet. You have to get pgbench to plot latency over time\n> to see them and then analyze that data. This problem is the main reason I\n> put together the pgbench-tools set for running things, because once you get\n> to processing the latency files and make graphs from them it starts to be a\n> pain to look at the results.\n\nI'll try to find time for this, but it may need to wait until the weekend again.\n\n>> I built 9.2 and using 9.2 and the following pgbench invocation:\n>>\n>> pgbench -j 8 -c 32 -M prepared -T 600\n>>\n>> transaction type: TPC-B (sort of)\n>> scaling factor: 400\n>\n>\n> I misread this completely in your message before; I thought you wrote 4000.\n> A scaling factor of 400 is making a database that's 6GB in size. Your test\n> is basically seeing how fast the system memory and the RAID cache can move\n> things around. In that situation, your read and write numbers are\n> reasonable. They aren't actually telling you anything useful about the\n> disks though, because they're barely involved here. You've sniffed the CPU,\n> memory, and RAID controller and they smell fine. You'll need at least an\n> order of magnitude increase in scale to get a whiff of the disks.\n\nLOL! Your phrasing is humourous and the information useful.\n\nI ran for 8.0 hours and go this:\n\ntransaction type: TPC-B (sort of)\nscaling factor: 400\nquery mode: prepared\nnumber of clients: 32\nnumber of threads: 8\nduration: 28800 s\nnumber of transactions actually processed: 609250619\ntps = 21154.058025 (including connections establishing)\ntps = 21154.075922 (excluding connections establishing)\n\n> pgbench scale numbers give approximately 16MB per scale factor. You don't\n> actually stress the drives until that total number is at least 2X as big as\n> RAM. We had to raise the limit on the pgbench scales recently because it\n> only goes up to ~20,000 on earlier versions, and that's not a big enough\n> scale to test many servers now.\n>\n> On the select-only tests, much of the increase from ~100K to ~200K is\n> probably going from 8.4 to 9.2. 
There's two major and several minor tuning\n> changes that make it much more efficient at that specific task.\n>\n>\n>> These are the *only* changes I've made to the config file:\n>>\n>> shared_buffers = 32GB\n>> wal_buffers = 16MB\n>> checkpoint_segments = 1024\n>\n>\n> Note that these are the only changes that actually impact pgbench results.\n> The test doesn't stress very many parts of the system, such as the query\n> optimizer.\n>\n> Also be aware these values may not be practical to use in production. You\n> can expect bad latency issues due to having shared_buffers so large. All\n> that memory has to be reconciled and written to disk if it's been modified\n> at each checkpoint, and 32GB of such work is a lot. I have systems where we\n> can't make shared_buffers any bigger than 4GB before checkpoint pauses get\n> too bad.\n>\n> Similarly, setting checkpoint_segments to 1024 means that you might go\n> through 16GB of writes before a checkpoint happens. That's great for\n> average performance...but when that checkpoint does hit, you're facing a\n> large random I/O backlog.\n\nI thought the bgwriter mitigated most of the problems here? Often I'll\nsee the actual checkpoints with 'sync' times typically below a few\nseconds (when there is anything to do at all). I can't say I've seen\ncheckpoint pauses in my workloads.\n\n> There's not much you can do about all this on the Linux side. If you drop\n> the dirty_* parameters too much, maintenance operations like VACUUM start to\n> get slow. Really all you can do is avoid setting shared_buffers and\n> checkpoint_segments too high, so the checkpoint backlog never gets gigantic.\n> The tuning you've done is using higher values than we normally recommend\n> because it's not quite practical to deploy like that. That and the very\n> small database are probably why your numbers are so high.\n\nMostly I do data warehouse type of workloads with very little (if any)\ndata modification after initial load time. Extensive benchmarking of\nthe actual applications involved has shown that - for me - a large\n(but not too large) shared_buffers (32GB is right about the sweet spot\nfor me, perhaps a bit on the high side) works well. Additionally, the\nlarge checkpoint_segments value really appears to help as well (again,\nthis is very workload dependent).\n\n>> Note: I did get better results with HT on vs. with HT off, so I've\n>> left HT on for now.\n>\n>\n> pgbench select-only in particular does like hyper-threading. We get\n> occasional reports of more memory-bound workloads actually slowing when it's\n> turned on. I think it's a wash and leave it on. Purchasing and management\n> people tend to get annoyed if they discover the core count of the server is\n> half what they thought they were buying. The potential downside of HT isn't\n> so big that its worth opening that can of worms, unless you've run real\n> application level tests to prove it hurts.\n\nGlad to get an \"it's a wash\" confirmation here.\n\n\n-- \nJon\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 11 Mar 2013 09:47:06 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sniff test on some PG 8.4 numbers"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm encountering very poor query plans when joining against a union,\nwhere one branch of the union is itself a join: specifically, Postgres\ndoes the entire inside join and then filters the result, rather than\npushing the filters down to the joined tables. I've provided a\nstandalone test case below, including some variations that don't show\nthe problem for comparison. The fourth query is the problematic one -\nI would have expected the Append portion of the plan to be essentially\nthe same as the one in the second query.\n\nI'm running Postgres 9.2.3 on Oracle Enterprise Linux 5 (some machines\n5.3, some 5.6) x86_64, installed using the RPMs from\nhttp://yum.pgrpms.org/. The problem occurs with identical results\n(other than raw speed) on both a rather underpowered test server with\ndefault Postgres settings and a much better live server which I've\nattempted to tune somewhat sensibly.\n\nThanks for any advice.\n\nSTART TRANSACTION;\n\nCREATE TABLE bundle_contents(\n item_id INTEGER PRIMARY KEY,\n bundle_type INTEGER NOT NULL\n);\nINSERT INTO bundle_contents(item_id, bundle_type)\n SELECT i * 10 + j, i\n FROM GENERATE_SERIES(1, 100) i,\n GENERATE_SERIES(1, 10) j;\nCREATE INDEX ON bundle_contents(bundle_type, item_id);\nANALYSE bundle_contents;\n\nCREATE TABLE bundle(\n bundle_id INTEGER PRIMARY KEY,\n bundle_type INTEGER NOT NULL\n);\nINSERT INTO bundle(bundle_id, bundle_type)\n SELECT i * 1000 + j, i\n FROM GENERATE_SERIES(1, 100) i,\n GENERATE_SERIES(1, 1000) j;\nCREATE INDEX ON bundle(bundle_type, bundle_id);\nANALYSE bundle;\n\nCREATE VIEW bundled_item AS\n SELECT bundle_id AS item_id_a, item_id AS item_id_b\n FROM bundle NATURAL JOIN bundle_contents;\n\nCREATE TABLE unbundled_item(\n item_id_a INTEGER,\n item_id_b INTEGER,\n PRIMARY KEY (item_id_a, item_id_b)\n);\nINSERT INTO unbundled_item(item_id_a, item_id_b)\n SELECT i, 1 FROM GENERATE_SERIES(1001, 100000, 10) i;\nINSERT INTO unbundled_item(item_id_a, item_id_b)\n SELECT i, 2 FROM GENERATE_SERIES(1001, 100000, 20) i;\nINSERT INTO unbundled_item(item_id_a, item_id_b)\n SELECT i, 3 FROM GENERATE_SERIES(1001, 100000, 23) i;\nINSERT INTO unbundled_item(item_id_a, item_id_b)\n SELECT i, 4 FROM GENERATE_SERIES(1001, 100000, 41) i;\nCREATE INDEX ON unbundled_item(item_id_b);\nANALYSE unbundled_item;\n\nCREATE VIEW item AS\n SELECT item_id_a, item_id_b FROM bundled_item\n UNION ALL\n SELECT item_id_a, item_id_b FROM unbundled_item;\n\nCREATE TABLE item_reference(\n reference_id INTEGER PRIMARY KEY,\n item_id_a INTEGER NOT NULL,\n item_id_b INTEGER NOT NULL\n);\nINSERT INTO item_reference(reference_id, item_id_a, item_id_b)\n VALUES(1, 1472, 16),\n (2, 1299, 3);\nCREATE INDEX ON item_reference(item_id_a, item_id_b);\nCREATE INDEX ON item_reference(item_id_b);\nANALYSE item_reference;\n\n-- no union, nice and fast\nEXPLAIN (ANALYSE, BUFFERS)\n SELECT *\n FROM bundled_item\n WHERE (item_id_a, item_id_b) = (1472, 16);\n/* http://explain.depesz.com/s/1ye\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..16.56 rows=1 width=8) (actual\ntime=0.040..0.041 rows=1 loops=1)\n Join Filter: (bundle.bundle_type = bundle_contents.bundle_type)\n Buffers: shared hit=8\n -> Index Scan using bundle_pkey on bundle (cost=0.00..8.28 rows=1\nwidth=8) (actual time=0.017..0.017 rows=1 loops=1)\n Index Cond: (bundle_id = 1472)\n Buffers: shared hit=4\n -> Index Scan using bundle_contents_pkey on 
bundle_contents\n(cost=0.00..8.27 rows=1 width=8) (actual time=0.012..0.013 rows=1\nloops=1)\n Index Cond: (item_id = 16)\n Buffers: shared hit=4\n Total runtime: 0.085 ms\n(10 rows)\n*/\n\n-- using the union, but querying it directly rather\n-- than joining to it, still fast\nEXPLAIN (ANALYSE, BUFFERS)\n SELECT *\n FROM item\n WHERE (item_id_a, item_id_b) = (1472, 16);\n/* http://explain.depesz.com/s/rwA\n\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..24.84 rows=2 width=8) (actual time=0.017..0.068\nrows=1 loops=1)\n Buffers: shared hit=6 read=3\n I/O Timings: read=0.032\n -> Append (cost=0.00..24.84 rows=2 width=8) (actual\ntime=0.015..0.066 rows=1 loops=1)\n Buffers: shared hit=6 read=3\n I/O Timings: read=0.032\n -> Nested Loop (cost=0.00..16.56 rows=1 width=8) (actual\ntime=0.015..0.017 rows=1 loops=1)\n Join Filter: (bundle.bundle_type = bundle_contents.bundle_type)\n Buffers: shared hit=6\n -> Index Scan using bundle_pkey on bundle\n(cost=0.00..8.28 rows=1 width=8) (actual time=0.006..0.007 rows=1\nloops=1)\n Index Cond: (bundle_id = 1472)\n Buffers: shared hit=3\n -> Index Scan using bundle_contents_pkey on\nbundle_contents (cost=0.00..8.27 rows=1 width=8) (actual\ntime=0.004..0.005 rows=1 loops=1)\n Index Cond: (item_id = 16)\n Buffers: shared hit=3\n -> Index Scan using unbundled_item_item_id_b_idx on\nunbundled_item (cost=0.00..8.27 rows=1 width=8) (actual\ntime=0.049..0.049 rows=0 loops=1)\n Index Cond: (item_id_b = 16)\n Filter: (item_id_a = 1472)\n Buffers: shared read=3\n I/O Timings: read=0.032\n Total runtime: 0.116 ms\n(21 rows)\n*/\n\n-- join but no union, still fast\nEXPLAIN (ANALYSE, BUFFERS)\n SELECT *\n FROM bundled_item\n NATURAL JOIN item_reference\n WHERE reference_id = 1;\n/* http://explain.depesz.com/s/oOy\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=1.04..4.50 rows=1 width=12) (actual\ntime=0.092..0.096 rows=1 loops=1)\n Buffers: shared hit=5 read=3\n I/O Timings: read=0.038\n -> Merge Join (cost=1.04..1.33 rows=1 width=16) (actual\ntime=0.032..0.035 rows=1 loops=1)\n Merge Cond: (bundle_contents.item_id = item_reference.item_id_b)\n Buffers: shared hit=4\n -> Index Scan using bundle_contents_pkey on bundle_contents\n(cost=0.00..43.25 rows=1000 width=8) (actual time=0.010..0.013 rows=7\nloops=1)\n Buffers: shared hit=3\n -> Sort (cost=1.03..1.04 rows=1 width=12) (actual\ntime=0.012..0.012 rows=1 loops=1)\n Sort Key: item_reference.item_id_b\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=1\n -> Seq Scan on item_reference (cost=0.00..1.02 rows=1\nwidth=12) (actual time=0.005..0.007 rows=1 loops=1)\n Filter: (reference_id = 1)\n Rows Removed by Filter: 1\n Buffers: shared hit=1\n -> Index Only Scan using bundle_bundle_type_bundle_id_idx on\nbundle (cost=0.00..3.16 rows=1 width=8) (actual time=0.056..0.057\nrows=1 loops=1)\n Index Cond: ((bundle_type = bundle_contents.bundle_type) AND\n(bundle_id = item_reference.item_id_a))\n Heap Fetches: 1\n Buffers: shared hit=1 read=3\n I/O Timings: read=0.038\n Total runtime: 0.139 ms\n(22 rows)\n*/\n\n-- join to the union, slow. 
the conditions are pushed into the\n-- index scan in the non-join branch of the union, but in the join\n-- branch they're applied after the entire join is built\nEXPLAIN (ANALYSE, BUFFERS)\n SELECT *\n FROM item\n NATURAL JOIN item_reference\n WHERE reference_id = 1;\n/* http://explain.depesz.com/s/M73\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=27.50..27979.82 rows=26 width=12) (actual\ntime=3.230..554.473 rows=1 loops=1)\n Buffers: shared hit=452\n -> Seq Scan on item_reference (cost=0.00..1.02 rows=1 width=12)\n(actual time=0.013..0.016 rows=1 loops=1)\n Filter: (reference_id = 1)\n Rows Removed by Filter: 1\n Buffers: shared hit=1\n -> Append (cost=27.50..27978.78 rows=2 width=8) (actual\ntime=3.214..554.454 rows=1 loops=1)\n Buffers: shared hit=451\n -> Subquery Scan on \"*SELECT* 1\" (cost=27.50..27970.50\nrows=1 width=8) (actual time=3.212..554.422 rows=1 loops=1)\n Filter: ((item_reference.item_id_a = \"*SELECT*\n1\".item_id_a) AND (item_reference.item_id_b = \"*SELECT* 1\".item_id_b))\n Rows Removed by Filter: 999999\n Buffers: shared hit=448\n -> Hash Join (cost=27.50..12970.50 rows=1000000\nwidth=8) (actual time=0.617..344.127 rows=1000000 loops=1)\n Hash Cond: (bundle.bundle_type =\nbundle_contents.bundle_type)\n Buffers: shared hit=448\n -> Seq Scan on bundle (cost=0.00..1443.00\nrows=100000 width=8) (actual time=0.009..22.066 rows=100000 loops=1)\n Buffers: shared hit=443\n -> Hash (cost=15.00..15.00 rows=1000 width=8)\n(actual time=0.594..0.594 rows=1000 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 40kB\n Buffers: shared hit=5\n -> Seq Scan on bundle_contents\n(cost=0.00..15.00 rows=1000 width=8) (actual time=0.008..0.207\nrows=1000 loops=1)\n Buffers: shared hit=5\n -> Index Only Scan using unbundled_item_pkey on\nunbundled_item (cost=0.00..8.28 rows=1 width=8) (actual\ntime=0.021..0.021 rows=0 loops=1)\n Index Cond: ((item_id_a = item_reference.item_id_a) AND\n(item_id_b = item_reference.item_id_b))\n Heap Fetches: 0\n Buffers: shared hit=3\n Total runtime: 554.533 ms\n(27 rows)\n*/\n\nROLLBACK;\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 6 Mar 2013 14:54:04 +0000",
"msg_from": "David Leverton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor plan when joining against a union containing a join"
},
{
"msg_contents": "On 03/06/2013 06:54 AM, David Leverton wrote:\n> Hi all,\n> \n> I'm encountering very poor query plans when joining against a union,\n> where one branch of the union is itself a join: specifically, Postgres\n> does the entire inside join and then filters the result, rather than\n> pushing the filters down to the joined tables. I've provided a\n> standalone test case below, including some variations that don't show\n> the problem for comparison. The fourth query is the problematic one -\n> I would have expected the Append portion of the plan to be essentially\n> the same as the one in the second query.\n\nThanks for the test case!\n\nActually, in case #4, Postgres *is* pushing down the join qual into the\nsegments of the Union. It's just that that's not helping performance\nany; it's causing a really slow join on bundle, which is actually where\nyou're spending most of your time:\n\n -> Hash Join (cost=27.50..12970.50 rows=1000000\nwidth=8) (actual time=0.617..344.127 rows=1000000 loops=1)\n Hash Cond: (bundle.bundle_type =\nbundle_contents.bundle_type)\n Buffers: shared hit=448\n -> Seq Scan on bundle (cost=0.00..1443.00\nrows=100000 width=8) (actual time=0.009..22.066 rows=100000 loops=1)\n Buffers: shared hit=443\n -> Hash (cost=15.00..15.00 rows=1000 width=8)\n(actual time=0.594..0.594 rows=1000 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 40kB\n Buffers: shared hit=5\n -> Seq Scan on bundle_contents\n(cost=0.00..15.00 rows=1000 width=8) (actual time=0.008..0.207\nrows=1000 loops=1)\n Buffers: shared hit=5\n\nClearly this is the wrong strategy; Postgres should be letting the\nfilter on item_reference be the driver instead of hashing the whole\nbundle + bundle_contents join. I suspect that the qual pushdown into\nthe union is hitting an inability to transverse multiple joins, which\nwouldn't surprise me; in fact, I'd be surprised if it could do so.\n\nOn a pragmatic basis, joining against complex UNION expressions is\nliable to be a bit of a minefield for the next few generations of the\nPostgres planner; it's just really hard to optimize. You might think of\nusing outer joins instead of a UNION.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 06 Mar 2013 15:15:08 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor plan when joining against a union containing a\n join"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> On 03/06/2013 06:54 AM, David Leverton wrote:\n>> I'm encountering very poor query plans when joining against a union,\n\n> Actually, in case #4, Postgres *is* pushing down the join qual into the\n> segments of the Union.\n\nYeah, but not further. I believe the core issue here (as of 9.2) is\nthat we're not willing to generate parameterized paths for subquery\nrelations. We could do that without a huge amount of new code,\nI think, but the scary thing is how much time it might take to generate\n(and then discard most of the) plans for assorted parameterizations of\ncomplicated subqueries.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 07 Mar 2013 00:52:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor plan when joining against a union containing a join"
},
{
"msg_contents": "On 7 March 2013 05:52, Tom Lane <[email protected]> wrote:\n> Josh Berkus <[email protected]> writes:\n>> On 03/06/2013 06:54 AM, David Leverton wrote:\n>>> I'm encountering very poor query plans when joining against a union,\n>\n>> Actually, in case #4, Postgres *is* pushing down the join qual into the\n>> segments of the Union.\n>\n> Yeah, but not further. I believe the core issue here (as of 9.2) is\n> that we're not willing to generate parameterized paths for subquery\n> relations. We could do that without a huge amount of new code,\n> I think, but the scary thing is how much time it might take to generate\n> (and then discard most of the) plans for assorted parameterizations of\n> complicated subqueries.\n\nThanks for looking at this, both of you.\n\nDoes \"as of 9.2\" mean it's better in 9.3? I do intend to upgrade once\nit's released, so if it can handle this better (or if there's anything\nthat can be done to improve it between now and then without making\nother things worse) that would be great. Otherwise, I'm wondering if\nthe addition of LATERAL will help persuade the planner to do what I\nwant, something like this, perhaps? (please excuse any syntax\nmisunderstandings):\n\n SELECT *\n FROM item_reference,\n LATERAL (\n SELECT *\n FROM item\n WHERE (item.item_id_a, item.item_id_b)\n = (item_reference.item_id_a, item_reference.item_id_b)\n ) item\n WHERE reference_id = 1;\n\nI'm hoping this might help as the query in the test case where the\ndesired item_id_a and item_id_b were supplied literally rather than\nfrom a join was fast, and this version has a similar structure,\nalthough naturally it'll only work if the planner doesn't notice that\nit's really equivalent to the slow version and treat it the same way.\n\nIf not though, and in the meantime in any case, I suppose I'm looking\nfor a workaround. In the real application the queries involved are\ngenerated by code rather than hand-written, so it's not a disaster if\nthey have to be uglified a bit more than they are already. I'll see\nif I can figure something out, but if anyone has any suggestions they\nwould be much appreciated.\n\nI'm afraid I don't really see how Josh's outer join suggestion would\nhelp here, though, unless it was more of a general principle than\nsomething specific to this case. The two branches of the union don't\nhave any tables in common, so I don't see what I could be joining to.\n\nIdeally any alternative would keep the semantics the same as the\nexisting version, or at least as similar as possible, as the\napplication does need (or at least very much wants) to be able to work\nwith items, including using them in further joins, without caring\nwhether they're loose or part of a bundle. (And yes, it is a rather\nscary design in places, but it's the best thing I could come up with\nto achieve the requirements. Not sure if that says more about the\nrequirements or me....)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 7 Mar 2013 18:21:52 +0000",
"msg_from": "David Leverton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor plan when joining against a union containing a join"
},
{
"msg_contents": "David Leverton <[email protected]> writes:\n> On 7 March 2013 05:52, Tom Lane <[email protected]> wrote:\n>> Josh Berkus <[email protected]> writes:\n>>> Actually, in case #4, Postgres *is* pushing down the join qual into the\n>>> segments of the Union.\n\n>> Yeah, but not further. I believe the core issue here (as of 9.2) is\n>> that we're not willing to generate parameterized paths for subquery\n>> relations. We could do that without a huge amount of new code,\n>> I think, but the scary thing is how much time it might take to generate\n>> (and then discard most of the) plans for assorted parameterizations of\n>> complicated subqueries.\n\n> Thanks for looking at this, both of you.\n\n> Does \"as of 9.2\" mean it's better in 9.3?\n\nNo, I meant it was worse before 9.2 --- previous versions weren't even\ntheoretically capable of generating the plan shape you want. What\nyou're after is for the sub-join to be treated as a parameterized\nsub-plan, and we did not have any ability to do that for anything more\ncomplicated than a single-relation scan.\n\n> I do intend to upgrade once\n> it's released, so if it can handle this better (or if there's anything\n> that can be done to improve it between now and then without making\n> other things worse) that would be great. Otherwise, I'm wondering if\n> the addition of LATERAL will help persuade the planner to do what I\n> want, something like this, perhaps?\n\nGood idea, but no such luck in that form: it's still not going to try to\npush the parameterization down into the sub-query. I think you'd have\nto write out the query with the views expanded and manually put the\nWHERE restrictions into the lowest join level. [ experiments... ]\nLooks like only the UNION view has to be manually expanded to get a\ngood plan with HEAD:\n\nregression=# explain SELECT *\n FROM item_reference,\n LATERAL (\n SELECT item_id_a, item_id_b FROM bundled_item WHERE (item_id_a, item_id_b)\n = (item_reference.item_id_a, item_reference.item_id_b)\n UNION ALL\n SELECT item_id_a, item_id_b FROM unbundled_item WHERE (item_id_a, item_id_b)\n = (item_reference.item_id_a, item_reference.item_id_b)\n ) item\n WHERE reference_id = 1;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.57..25.99 rows=2 width=20)\n -> Seq Scan on item_reference (cost=0.00..1.02 rows=1 width=12)\n Filter: (reference_id = 1)\n -> Append (cost=0.57..24.94 rows=2 width=8)\n -> Nested Loop (cost=0.57..16.61 rows=1 width=8)\n Join Filter: (bundle.bundle_type = bundle_contents.bundle_type)\n -> Index Scan using bundle_pkey on bundle (cost=0.29..8.31 rows=1 width=8)\n Index Cond: (bundle_id = item_reference.item_id_a)\n -> Index Scan using bundle_contents_pkey on bundle_contents (cost=0.28..8.29 rows=1 width=8)\n Index Cond: (item_id = item_reference.item_id_b)\n -> Index Only Scan using unbundled_item_pkey on unbundled_item (cost=0.29..8.31 rows=1 width=8)\n Index Cond: ((item_id_a = item_reference.item_id_a) AND (item_id_b = item_reference.item_id_b))\n(12 rows)\n\n\nYou might be able to accomplish something similar without LATERAL, if\nyou're willing to give up the notational convenience of the views.\nDon't have time right now to experiment further though.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 07 Mar 2013 13:47:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor plan when joining against a union containing a join"
},
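A sketch of the pre-LATERAL rewrite hinted at above, for anyone stuck on 9.2: distribute the join over the two branches of the union by hand, so that each branch can be driven by the item_reference row. The column list and aliases are illustrative rather than taken from the thread, but the tables are the ones from the original test case:

    SELECT ir.reference_id, b.bundle_id AS item_id_a, bc.item_id AS item_id_b
    FROM item_reference ir
    JOIN bundle b ON b.bundle_id = ir.item_id_a
    JOIN bundle_contents bc ON bc.item_id = ir.item_id_b
                           AND bc.bundle_type = b.bundle_type
    WHERE ir.reference_id = 1
    UNION ALL
    SELECT ir.reference_id, ui.item_id_a, ui.item_id_b
    FROM item_reference ir
    JOIN unbundled_item ui ON (ui.item_id_a, ui.item_id_b)
                            = (ir.item_id_a, ir.item_id_b)
    WHERE ir.reference_id = 1;

This gives up the notational convenience of the item view, as Tom notes, but each branch can reuse the index scans that made the literal-value query fast.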
{
"msg_contents": "On 7 March 2013 18:47, Tom Lane <[email protected]> wrote:\n> Good idea, but no such luck in that form: it's still not going to try to\n> push the parameterization down into the sub-query. I think you'd have\n> to write out the query with the views expanded and manually put the\n> WHERE restrictions into the lowest join level. [ experiments... ]\n> Looks like only the UNION view has to be manually expanded to get a\n> good plan with HEAD:\n\nThanks for checking that, good to know that there's a way forward with\n9.3 even in the worst case of not finding anything for 9.2.\n\n> You might be able to accomplish something similar without LATERAL, if\n> you're willing to give up the notational convenience of the views.\n> Don't have time right now to experiment further though.\n\nNo problem, I don't expect you do to everything for me ;-) (although\nif you do find the time to come up with something before I do, that\nwould of course be very welcome) and there's no desperate rush in any\ncase.\n\nPondering your earlier comment some more:\n\nOn 7 March 2013 05:52, Tom Lane <[email protected]> wrote:\n> I believe the core issue here (as of 9.2) is\n> that we're not willing to generate parameterized paths for subquery\n> relations. We could do that without a huge amount of new code,\n> I think, but the scary thing is how much time it might take to generate\n> (and then discard most of the) plans for assorted parameterizations of\n> complicated subqueries.\n\nWould it be reasonable to support this with some sort of configurable\ncomplexity threshold for the subquery, above which the planner won't\nbother? Probably not the most elegant solution, but maybe something\nto consider. It seems similar in spirit to from_collapse_limit and\njoin_collapse_limit, in the sense of controlling how much effort to\nput in for complex queries.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 8 Mar 2013 13:49:56 +0000",
"msg_from": "David Leverton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor plan when joining against a union containing a join"
}
] |
[
{
"msg_contents": "Considering this list is where I first learned of the Intel 320 drives (AFAIK, the only non-enterprise SSDs that are power-failure safe), I thought I'd see if any of the folks here that tend to test new stuff have got their hands on these yet.\n\nI had no idea these drives were out (but they still are a bit pricey, but cheaper than any spinning drives that would give the same sort of random IO performance), and while trying to find a place to source some spare 300GB 320s, I found this review:\n\nhttp://www.anandtech.com/show/6433/intel-ssd-dc-s3700-200gb-review\n\nOf most interest to me was this:\n\n\"Along one edge of the drive Intel uses two 35V 47µF capacitors, enough to allow the controller to commit any data (and most non-data) to NAND in the event of a power failure. The capacitors in the S3700 are periodically tested by the controller. In the event that they fail, the controller disables all write buffering and throws a SMART error flag.\"\n\nThis is also the first new Intel drive in a long time to use an Intel controller rather than a SandForce (which frankly, I don't trust).\n\nAnyone have any benchmarks to share?\n\nAre there any other sub-$1K drives out there currently that incorporate power loss protection like this and the 320s do?\n\nThanks,\n\nCharles\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 8 Mar 2013 01:48:42 -0500",
"msg_from": "CSS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Anyone running Intel S3700 SSDs?"
},
{
"msg_contents": "On 2013-03-08 07:48, CSS wrote:\n>\n> http://www.anandtech.com/show/6433/intel-ssd-dc-s3700-200gb-review\n>\n> Of most interest to me was this:\n>\n>\n> Anyone have any benchmarks to share?\n>\nI tested the 800GB disk with diskchecker.pl and it was ok. No benchmarks \nto share yet, as I'm waiting for the disks to be installed in a server.\n\n-- \nYeb Havinga\nhttp://www.mgrid.net/\nMastering Medical Data\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 08 Mar 2013 09:51:22 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Anyone running Intel S3700 SSDs?"
}
] |
[
{
"msg_contents": "Hi,\n\nI upgraded our master database server from 9.2.2 to 9.2.3 on Monday. We\nhave been experiencing performance problems since then. Yesterday, our\napplication hit the connection limit 5 times. It causes approximately\n15 seconds of downtime. The database server hit 50 load average, then\neverything came back to normal.\n\nWe have a very good database server dedicated to PostgreSQL. It has 64\ncores and 200 GiB of RAM which is 2 times bigger than our database.\nWe run PostgreSQL on RHEL relase 6.2. The database executes 2k transactions\nper second in busy hours. The server is running 1 - 2 load average\nnormally.\n\nPostgreSQL writes several following logs during the problem which I never\nsaw before 9.2.3:\n\nLOG: process 4793 acquired ExclusiveLock on extension of relation 305605 \nof database 16396 after 2348.675 ms\n\nThe relation 305605 was the biggest table of the database. Our application\nstores web service logs as XML's on that table. It is only used to insert\nnew rows. One row is approximately 2 MB and 50 rows inserted per second\nat most busy times. We saw autovacuum processes during the problem. We\ndisabled autovacuum for that table but is does not help. I tried to archive\nthe table. Create a new empty one, but it does not help, too.\n\nWe also have an unlogged table to used by our application for locking.\nIt is autovacuumed every 5 minutes as new rows are inserted and deleted\ncontinuously.\n\nMost of our configuration parameters remain default except the following:\n\nmax_connections = 200\nshared_buffers = 64GB\nmax_prepared_transactions = 0\nwork_mem = 64MB\nmaintenance_work_mem = 512MB\nshared_preload_libraries = '$libdir/pg_stat_statements'\nwal_level = hot_standby\ncheckpoint_segments = 40\neffective_cache_size = 128GB\ntrack_activity_query_size = 8192\nautovacuum = on\nautovacuum_max_workers = 10\n\nI will try to reduce autovacuum_max_workers and increase max_connections\nto avoid downtime. Do you have any other suggestions? Do you know what \nmight\nhave caused this problem? Do you think downgrading to 9.2.2 is a good idea?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 08 Mar 2013 13:27:16 +0200",
"msg_from": "\"Emre Hasegeli\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 9.2.3 performance problem caused Exclusive locks"
},
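While one of these stalls is in progress, the backends stuck behind the extension lock, and what they are running, can be seen from pg_locks; a minimal sketch, using the 9.2 pg_stat_activity column names:

    SELECT l.pid, l.locktype, l.mode, l.relation::regclass AS relation,
           a.state, a.query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.pid = l.pid
    WHERE NOT l.granted;

Rows with locktype = 'extend' are the relation extension waits reported in the log lines above.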
{
"msg_contents": "On Fri, 08 Mar 2013 20:39:45 +0200, AI Rumman <[email protected]> wrote:\n\n> Knowing your problem, I read the docs and found that :\n> *\n> *\n>\n> *Fix performance problems with autovacuum truncation in busy workloads \n> (Jan\n> Wieck)*\n>\n> *Truncation of empty pages at the end of a table requires exclusive lock,\n> but autovacuum was coded to fail (and release the table lock) when there\n> are conflicting lock requests. Under load, it is easily possible that\n> truncation would never occur, resulting in table bloat. Fix by \n> performing a\n> partial truncation, releasing the lock, then attempting to re-acquire the\n> lock and \n> continue<http://www.postgresql.org/docs/9.2/static/release-9-2-3.html#>.\n> This fix also greatly reduces the average time before autovacuum releases\n> the lock after a conflicting request arrives.*\n>\n> This could be a reason of your locking.\n\nYes, I saw this. It is commit b19e4250b45e91c9cbdd18d35ea6391ab5961c8d by\nJan Wieck. He also seems worried in the commit message about this patch. Do\nyou think this is the exact reason of the problem?\n\nI have downgraded to 9.2.2, decreased the autovacuum_max_workers to 2 from\n10 and increase max_connections to 500 from 200 in the mean time. There are\nnot any ExclusiveLock's since then.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 08 Mar 2013 21:56:23 +0200",
"msg_from": "\"Emre Hasegeli\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 9.2.3 performance problem caused Exclusive\n locks"
},
{
"msg_contents": "2013-03-08 13:27:16 +0200 Emre Hasegeli <[email protected]>:\n\n> PostgreSQL writes several following logs during the problem which I never\n> saw before 9.2.3:\n>\n> LOG: process 4793 acquired ExclusiveLock on extension of relation \n> 305605 of database 16396 after 2348.675 ms\n\nI tried\n\n* to downgrade to 9.2.2\n* to disable autovacuum\n* to disable synchronous commit\n* to write less on the big tables\n* to increase checkpoint segments\n* to increase max connections\n* to move pg_xlog to sepe\n\nNone of them helps to avoid downtimes. I could not find anything related\nto it? Do you have any idea? Have you ever experience something like this?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 12 Mar 2013 21:17:33 +0200",
"msg_from": "\"Emre Hasegeli\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 9.2.3 performance problem caused Exclusive locks"
},
{
"msg_contents": "Emre,\n\n> > LOG: process 4793 acquired ExclusiveLock on extension of relation\n> > 305605 of database 16396 after 2348.675 ms\n\nThe reason you're seeing that message is that you have log_lock_waits turned on.\n\nThat message says that some process waited for 2.3 seconds to get a lock for expanding the size of relation 16396/305605, which is most likely an index. This is most likely due to changes in your application, or an increase in concurrent write activity.\n\n--Josh\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Mar 2013 16:04:58 -0500 (CDT)",
"msg_from": "Joshua Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.2.3 performance problem caused Exclusive\n locks"
},
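For what it is worth, the OID in that log line can be resolved directly, which settles whether 305605 is the big XML table itself or one of its indexes; a generic lookup, not something quoted from the thread:

    SELECT oid::regclass AS relation, relkind,
           pg_size_pretty(pg_relation_size(oid::regclass)) AS size
    FROM pg_class
    WHERE oid = 305605;

relkind is 'r' for a plain table and 'i' for an index.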
{
"msg_contents": "On Friday, March 8, 2013, Emre Hasegeli wrote:\n\nPostgreSQL writes several following logs during the problem which I never\n> saw before 9.2.3:\n>\n> LOG: process 4793 acquired ExclusiveLock on extension of relation 305605\n> of database 16396 after 2348.675 ms\n>\n\nThe key here is not that it is an ExclusiveLock, but rather than it is the\nrelation extension lock. I don't think the extension lock is ever held\nacross user-code, or transaction boundaries, or anything like that. It is\nheld over some small IOs. So if it blocked on that for over 2 seconds, you\nalmost surely have some serious IO congestion.\n\nAnd this particular message is probably more a symptom of that congestion\nthan anything else.\n\nYou said you rolled back to 9.2.2 and the stalling is still there. Are you\nstill seeing the log message, or are you now seeing silently stalls? Did\nyou roll back all other changes that were made at the same time as the\nupgrade to 9.2.3 (kernel versions, filesystem changes/versions, etc.)?\n\nCheers,\n\nJeff\n\nOn Friday, March 8, 2013, Emre Hasegeli wrote:\nPostgreSQL writes several following logs during the problem which I never\nsaw before 9.2.3:\n\nLOG: process 4793 acquired ExclusiveLock on extension of relation 305605 of database 16396 after 2348.675 msThe key here is not that it is an ExclusiveLock, but rather than it is the relation extension lock. I don't think the extension lock is ever held across user-code, or transaction boundaries, or anything like that. It is held over some small IOs. So if it blocked on that for over 2 seconds, you almost surely have some serious IO congestion.\nAnd this particular message is probably more a symptom of that congestion than anything else.You said you rolled back to 9.2.2 and the stalling is still there. Are you still seeing the log message, or are you now seeing silently stalls? Did you roll back all other changes that were made at the same time as the upgrade to 9.2.3 (kernel versions, filesystem changes/versions, etc.)?\nCheers,Jeff",
"msg_date": "Wed, 13 Mar 2013 21:53:55 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "PostgreSQL 9.2.3 performance problem caused Exclusive locks"
},
{
"msg_contents": "On Thu, 14 Mar 2013 06:53:55 +0200, Jeff Janes <[email protected]> \nwrote:\n\n> On Friday, March 8, 2013, Emre Hasegeli wrote:\n>\n> PostgreSQL writes several following logs during the problem which I never\n>> saw before 9.2.3:\n>>\n>> LOG: process 4793 acquired ExclusiveLock on extension of relation \n>> 305605\n>> of database 16396 after 2348.675 ms\n>>\n>\n> The key here is not that it is an ExclusiveLock, but rather than it is \n> the\n> relation extension lock. I don't think the extension lock is ever held\n> across user-code, or transaction boundaries, or anything like that. It \n> is\n> held over some small IOs. So if it blocked on that for over 2 seconds, \n> you\n> almost surely have some serious IO congestion.\n>\n> And this particular message is probably more a symptom of that congestion\n> than anything else.\n>\n> You said you rolled back to 9.2.2 and the stalling is still there. Are \n> you\n> still seeing the log message, or are you now seeing silently stalls? Did\n> you roll back all other changes that were made at the same time as the\n> upgrade to 9.2.3 (kernel versions, filesystem changes/versions, etc.)?\n\nI did not try with different kernel or file system. It was not because of\n9.2.3, same problem occurred in both 9.2.2 and 9.2.3. Increasing max\nconnections make it worse. It lasts almost 15 minutes in the last time.\n\nThere were not much disk utilization while it is happening, \"top\" was\npointing out most of the CPU usage on the %sy column, there were no IO \nwait.\nI saw \"allocstalls\" increasing on \"atop\". There were a lot of slow insert\nstatements in the logs except ExclusiveLock waits.\n\nWe were using 64 GiB of shared buffers. RhodiumToad suggested to reduce it\non the IRC channel. It did not happen since then.\n\nIt was a real problem for us. I could not find anything related to it. I\ncannot let it happen again on the production environment but I would be\nhappy to share more experience, if it would help you fix it.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Mar 2013 19:07:28 +0200",
"msg_from": "\"Emre Hasegeli\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 9.2.3 performance problem caused Exclusive\n locks"
},
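For the record, the change reported to help here was shrinking shared_buffers. The thread does not say what value was chosen, so the figures below only illustrate the shape of the change, not a recommendation from the participants:

    # postgresql.conf -- illustrative values only
    shared_buffers = 8GB            # previously 64GB
    effective_cache_size = 128GB    # unchanged; the OS page cache still covers the data set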
{
"msg_contents": "I am having the same exact problems. I reduced shared buffers as that seems\nto have done the trick for now in this thread. If things improve I'll post\nback and confirm.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/PostgreSQL-9-2-3-performance-problem-caused-Exclusive-locks-tp5747909p5756113.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 18 May 2013 21:40:04 -0700 (PDT)",
"msg_from": "jonranes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.2.3 performance problem caused Exclusive locks"
}
] |
[
{
"msg_contents": "Greetings,\n\n\n\nI have a large table (~90 million rows) containing vessel positions. In\naddition to a column that contains the location information (the_geom), the\ntable also contains two columns that are used to uniquely identify the\nvessel (mmsi and name) and a column containing the Unix time (epoch) at\nwhich the position information was logged. I frequently need to assign\nrecords to vessel transits. To do this, I currently create a CTE that uses\na Window function (partitioning the data by mmsi and name ordered by epoch)\nto examine the time that has elapsed between successive position reports\nfor individual vessels. For every position record for a vessel (as\nidentified using mmsi and name), if the time elapsed between the current\nposition record and the previous record (using the lag function) is less\nthan or equal to 2 hours, I assign the record a value of 0 to a CTE column\nnamed tr_index. If the time elapsed is greater than 2 hours, I assign the\nrecord a value of 1 to the tr_index column. I then use the CTE to generate\ntransit numbers by summing the values in the tr_index field across a Window\nthat also partitions the data by mmsi and name and is ordered by epoch.\nThis works, but is very slow (hours). The table is indexed (multi-column\nindex on mmsi, name and index on epoch). Does anyone see a way to get what\nI am after in a more efficient manner. What I am after is an assignment of\ntransit number to vessels' position records based on whether the records\nwere within two hours of each other. The SQL that I used is provided below.\nAny advice would be greatly appreciated...\n\n\n\nWITH\n\ncte_01 AS\n\n(\n\nSELECT\n\na.id,\n\na.mmsi,\n\na.name,\n\na.epoch,\n\na.the_geom\n\nCASE\n\n WHEN ((a.epoch - lag(a.epoch) OVER w) / 60) > 120 THEN 1\n\n ELSE 0\n\nEND AS tr_index\n\nFROM table a\n\nWINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n\n)\n\n\n\n\n\nSELECT\n\na.id,\n\na.mmsi,\n\na.name,\n\na.epoch,\n\na.the_geom,\n\n1 + sum(a.tr_index) OVER w AS transit,\n\na.active\n\nFROM cte_01 a\n\nWINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n\n\n-- \nJeff\n\nGreetings,\n \nI have a large table (~90 million rows) containing vessel\npositions. In addition to a column that contains the location information\n(the_geom), the table also contains two columns that are used to uniquely\nidentify the vessel (mmsi and name) and a column containing the Unix time\n(epoch) at which the position information was logged. I frequently need to assign\nrecords to vessel transits. To do this, I currently create a CTE that uses a\nWindow function (partitioning the data by mmsi and name ordered by epoch) to\nexamine the time that has elapsed between successive position reports for\nindividual vessels. For every position record for a vessel (as identified using\nmmsi and name), if the time elapsed between the current position record and the\nprevious record (using the lag function) is less than or equal to 2 hours, I\nassign the record a value of 0 to a CTE column named tr_index. If the time\nelapsed is greater than 2 hours, I assign the record a value of 1 to the\ntr_index column. I then use the CTE to generate transit numbers by summing the\nvalues in the tr_index field across a Window that also partitions the data by\nmmsi and name and is ordered by epoch. This works, but is very slow (hours).\nThe table is indexed (multi-column index on mmsi, name and index on epoch). Does\nanyone see a way to get what I am after in a more efficient manner. 
What I am\nafter is an assignment of transit number to vessels' position records based on whether\nthe records were within two hours of each other. The SQL that I used is\nprovided below. Any advice would be greatly appreciated...\n \nWITH\ncte_01 AS \n(\nSELECT \na.id, \na.mmsi, \na.name, \na.epoch, \na.the_geom\nCASE\n WHEN ((a.epoch -\nlag(a.epoch) OVER w) / 60) > 120 THEN 1\n ELSE 0\nEND AS tr_index \nFROM table a\nWINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n)\n \n \nSELECT \na.id, \na.mmsi, \na.name, \na.epoch, \na.the_geom,\n1 + sum(a.tr_index) OVER w AS transit, \na.active \nFROM cte_01 a\nWINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n-- Jeff",
"msg_date": "Mon, 11 Mar 2013 10:27:17 -0400",
"msg_from": "Jeff Adams - NOAA Affiliate <[email protected]>",
"msg_from_op": true,
"msg_subject": "Large Table - Slow Window Functions (Better Approach?)"
},
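As posted, the SELECT list in the CTE has lost the comma after a.the_geom, and the outer query refers to a.active even though the CTE never selects it, so the statement as shown would not parse. A self-contained reading of what was presumably intended, with a hypothetical table name standing in for the anonymised "table":

    WITH cte_01 AS (
        SELECT
            a.id,
            a.mmsi,
            a.name,
            a.epoch,
            a.the_geom,
            a.active,
            CASE
                WHEN ((a.epoch - lag(a.epoch) OVER w) / 60) > 120 THEN 1
                ELSE 0
            END AS tr_index
        FROM vessel_position a      -- hypothetical name for the real table
        WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)
    )
    SELECT
        a.id,
        a.mmsi,
        a.name,
        a.epoch,
        a.the_geom,
        1 + sum(a.tr_index) OVER w AS transit,
        a.active
    FROM cte_01 a
    WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch);

The logic matches the description above; the cost is mostly in sorting the ~90 million rows for each of the two window clauses.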
{
"msg_contents": "Hello\n\nyou can try procedural solution - use a cursor over ordered data in\nplpgsql and returns table\n\nRegards\n\nPavel Stehule\n\n2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>:\n> Greetings,\n>\n>\n>\n> I have a large table (~90 million rows) containing vessel positions. In\n> addition to a column that contains the location information (the_geom), the\n> table also contains two columns that are used to uniquely identify the\n> vessel (mmsi and name) and a column containing the Unix time (epoch) at\n> which the position information was logged. I frequently need to assign\n> records to vessel transits. To do this, I currently create a CTE that uses a\n> Window function (partitioning the data by mmsi and name ordered by epoch) to\n> examine the time that has elapsed between successive position reports for\n> individual vessels. For every position record for a vessel (as identified\n> using mmsi and name), if the time elapsed between the current position\n> record and the previous record (using the lag function) is less than or\n> equal to 2 hours, I assign the record a value of 0 to a CTE column named\n> tr_index. If the time elapsed is greater than 2 hours, I assign the record a\n> value of 1 to the tr_index column. I then use the CTE to generate transit\n> numbers by summing the values in the tr_index field across a Window that\n> also partitions the data by mmsi and name and is ordered by epoch. This\n> works, but is very slow (hours). The table is indexed (multi-column index on\n> mmsi, name and index on epoch). Does anyone see a way to get what I am after\n> in a more efficient manner. What I am after is an assignment of transit\n> number to vessels' position records based on whether the records were within\n> two hours of each other. The SQL that I used is provided below. Any advice\n> would be greatly appreciated...\n>\n>\n>\n> WITH\n>\n> cte_01 AS\n>\n> (\n>\n> SELECT\n>\n> a.id,\n>\n> a.mmsi,\n>\n> a.name,\n>\n> a.epoch,\n>\n> a.the_geom\n>\n> CASE\n>\n> WHEN ((a.epoch - lag(a.epoch) OVER w) / 60) > 120 THEN 1\n>\n> ELSE 0\n>\n> END AS tr_index\n>\n> FROM table a\n>\n> WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n>\n> )\n>\n>\n>\n>\n>\n> SELECT\n>\n> a.id,\n>\n> a.mmsi,\n>\n> a.name,\n>\n> a.epoch,\n>\n> a.the_geom,\n>\n> 1 + sum(a.tr_index) OVER w AS transit,\n>\n> a.active\n>\n> FROM cte_01 a\n>\n> WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n>\n>\n>\n> --\n> Jeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 11 Mar 2013 16:03:38 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large Table - Slow Window Functions (Better Approach?)"
},
{
"msg_contents": "Pavel,\n\nThanks for the response. I have not yet had the opportunity to use cursors,\nbut am now curious. Could you perhaps provide a bit more detail as to what\nthe implementation of your suggested approach would look like?\n\nOn Mon, Mar 11, 2013 at 11:03 AM, Pavel Stehule <[email protected]>wrote:\n\n> Hello\n>\n> you can try procedural solution - use a cursor over ordered data in\n> plpgsql and returns table\n>\n> Regards\n>\n> Pavel Stehule\n>\n> 2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>:\n> > Greetings,\n> >\n> >\n> >\n> > I have a large table (~90 million rows) containing vessel positions. In\n> > addition to a column that contains the location information (the_geom),\n> the\n> > table also contains two columns that are used to uniquely identify the\n> > vessel (mmsi and name) and a column containing the Unix time (epoch) at\n> > which the position information was logged. I frequently need to assign\n> > records to vessel transits. To do this, I currently create a CTE that\n> uses a\n> > Window function (partitioning the data by mmsi and name ordered by\n> epoch) to\n> > examine the time that has elapsed between successive position reports for\n> > individual vessels. For every position record for a vessel (as identified\n> > using mmsi and name), if the time elapsed between the current position\n> > record and the previous record (using the lag function) is less than or\n> > equal to 2 hours, I assign the record a value of 0 to a CTE column named\n> > tr_index. If the time elapsed is greater than 2 hours, I assign the\n> record a\n> > value of 1 to the tr_index column. I then use the CTE to generate transit\n> > numbers by summing the values in the tr_index field across a Window that\n> > also partitions the data by mmsi and name and is ordered by epoch. This\n> > works, but is very slow (hours). The table is indexed (multi-column\n> index on\n> > mmsi, name and index on epoch). Does anyone see a way to get what I am\n> after\n> > in a more efficient manner. What I am after is an assignment of transit\n> > number to vessels' position records based on whether the records were\n> within\n> > two hours of each other. The SQL that I used is provided below. Any\n> advice\n> > would be greatly appreciated...\n> >\n> >\n> >\n> > WITH\n> >\n> > cte_01 AS\n> >\n> > (\n> >\n> > SELECT\n> >\n> > a.id,\n> >\n> > a.mmsi,\n> >\n> > a.name,\n> >\n> > a.epoch,\n> >\n> > a.the_geom\n> >\n> > CASE\n> >\n> > WHEN ((a.epoch - lag(a.epoch) OVER w) / 60) > 120 THEN 1\n> >\n> > ELSE 0\n> >\n> > END AS tr_index\n> >\n> > FROM table a\n> >\n> > WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n> >\n> > )\n> >\n> >\n> >\n> >\n> >\n> > SELECT\n> >\n> > a.id,\n> >\n> > a.mmsi,\n> >\n> > a.name,\n> >\n> > a.epoch,\n> >\n> > a.the_geom,\n> >\n> > 1 + sum(a.tr_index) OVER w AS transit,\n> >\n> > a.active\n> >\n> > FROM cte_01 a\n> >\n> > WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n> >\n> >\n> >\n> > --\n> > Jeff\n>\n\n\n\n-- \nJeffrey D. Adams\nContractor\nOAI, Inc.\nIn support of:\nNational Marine Fisheries Service\nOffice of Protected Resources\n1315 East West Hwy, Building SSMC3\nSilver Spring, MD 20910-3282\nphone: (301) 427-8434\nfax: (301) 713-0376\n\nPavel,Thanks for the response. I have not yet had the opportunity to use cursors, but am now curious. 
Could you perhaps provide a bit more detail as to what the implementation of your suggested approach would look like?\nOn Mon, Mar 11, 2013 at 11:03 AM, Pavel Stehule <[email protected]> wrote:\nHello\n\nyou can try procedural solution - use a cursor over ordered data in\nplpgsql and returns table\n\nRegards\n\nPavel Stehule\n\n2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>:\n> Greetings,\n>\n>\n>\n> I have a large table (~90 million rows) containing vessel positions. In\n> addition to a column that contains the location information (the_geom), the\n> table also contains two columns that are used to uniquely identify the\n> vessel (mmsi and name) and a column containing the Unix time (epoch) at\n> which the position information was logged. I frequently need to assign\n> records to vessel transits. To do this, I currently create a CTE that uses a\n> Window function (partitioning the data by mmsi and name ordered by epoch) to\n> examine the time that has elapsed between successive position reports for\n> individual vessels. For every position record for a vessel (as identified\n> using mmsi and name), if the time elapsed between the current position\n> record and the previous record (using the lag function) is less than or\n> equal to 2 hours, I assign the record a value of 0 to a CTE column named\n> tr_index. If the time elapsed is greater than 2 hours, I assign the record a\n> value of 1 to the tr_index column. I then use the CTE to generate transit\n> numbers by summing the values in the tr_index field across a Window that\n> also partitions the data by mmsi and name and is ordered by epoch. This\n> works, but is very slow (hours). The table is indexed (multi-column index on\n> mmsi, name and index on epoch). Does anyone see a way to get what I am after\n> in a more efficient manner. What I am after is an assignment of transit\n> number to vessels' position records based on whether the records were within\n> two hours of each other. The SQL that I used is provided below. Any advice\n> would be greatly appreciated...\n>\n>\n>\n> WITH\n>\n> cte_01 AS\n>\n> (\n>\n> SELECT\n>\n> a.id,\n>\n> a.mmsi,\n>\n> a.name,\n>\n> a.epoch,\n>\n> a.the_geom\n>\n> CASE\n>\n> WHEN ((a.epoch - lag(a.epoch) OVER w) / 60) > 120 THEN 1\n>\n> ELSE 0\n>\n> END AS tr_index\n>\n> FROM table a\n>\n> WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n>\n> )\n>\n>\n>\n>\n>\n> SELECT\n>\n> a.id,\n>\n> a.mmsi,\n>\n> a.name,\n>\n> a.epoch,\n>\n> a.the_geom,\n>\n> 1 + sum(a.tr_index) OVER w AS transit,\n>\n> a.active\n>\n> FROM cte_01 a\n>\n> WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n>\n>\n>\n> --\n> Jeff\n-- Jeffrey D. AdamsContractorOAI, Inc.In support of:National Marine Fisheries ServiceOffice of Protected Resources1315 East West Hwy, Building SSMC3\nSilver Spring, MD 20910-3282phone: (301) 427-8434fax: (301) 713-0376",
"msg_date": "Mon, 11 Mar 2013 11:20:07 -0400",
"msg_from": "Jeff Adams - NOAA Affiliate <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Large Table - Slow Window Functions (Better Approach?)"
},
{
"msg_contents": "2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>:\n> Pavel,\n>\n> Thanks for the response. I have not yet had the opportunity to use cursors,\n> but am now curious. Could you perhaps provide a bit more detail as to what\n> the implementation of your suggested approach would look like?\n\nan example:\n\n$$\nDECLARE\n r record;\n prev_r record;\n\nBEGIN\n FOR r IN SELECT * FROM a ORDER BY epoch, mmsi\n LOOP\n IF prev_r IS NOT NULL THEN\n /* do some counting */\n prev_r contains previous row, r contains current row\n do some\n RETURN NEXT .. /* return data in defined order */\n END IF;\n prev_r = r;\n END LOOP;\n\n\nProbably slow part of your query is sorting - first can be accelerated\nby index, but second (as CTE result cannot) - you can try increase\nwork_mem ??\n\nRegards\n\nPavel\n\n>\n>\n> On Mon, Mar 11, 2013 at 11:03 AM, Pavel Stehule <[email protected]>\n> wrote:\n>>\n>> Hello\n>>\n>> you can try procedural solution - use a cursor over ordered data in\n>> plpgsql and returns table\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>> 2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>:\n>> > Greetings,\n>> >\n>> >\n>> >\n>> > I have a large table (~90 million rows) containing vessel positions. In\n>> > addition to a column that contains the location information (the_geom),\n>> > the\n>> > table also contains two columns that are used to uniquely identify the\n>> > vessel (mmsi and name) and a column containing the Unix time (epoch) at\n>> > which the position information was logged. I frequently need to assign\n>> > records to vessel transits. To do this, I currently create a CTE that\n>> > uses a\n>> > Window function (partitioning the data by mmsi and name ordered by\n>> > epoch) to\n>> > examine the time that has elapsed between successive position reports\n>> > for\n>> > individual vessels. For every position record for a vessel (as\n>> > identified\n>> > using mmsi and name), if the time elapsed between the current position\n>> > record and the previous record (using the lag function) is less than or\n>> > equal to 2 hours, I assign the record a value of 0 to a CTE column named\n>> > tr_index. If the time elapsed is greater than 2 hours, I assign the\n>> > record a\n>> > value of 1 to the tr_index column. I then use the CTE to generate\n>> > transit\n>> > numbers by summing the values in the tr_index field across a Window that\n>> > also partitions the data by mmsi and name and is ordered by epoch. This\n>> > works, but is very slow (hours). The table is indexed (multi-column\n>> > index on\n>> > mmsi, name and index on epoch). Does anyone see a way to get what I am\n>> > after\n>> > in a more efficient manner. What I am after is an assignment of transit\n>> > number to vessels' position records based on whether the records were\n>> > within\n>> > two hours of each other. The SQL that I used is provided below. 
Any\n>> > advice\n>> > would be greatly appreciated...\n>> >\n>> >\n>> >\n>> > WITH\n>> >\n>> > cte_01 AS\n>> >\n>> > (\n>> >\n>> > SELECT\n>> >\n>> > a.id,\n>> >\n>> > a.mmsi,\n>> >\n>> > a.name,\n>> >\n>> > a.epoch,\n>> >\n>> > a.the_geom\n>> >\n>> > CASE\n>> >\n>> > WHEN ((a.epoch - lag(a.epoch) OVER w) / 60) > 120 THEN 1\n>> >\n>> > ELSE 0\n>> >\n>> > END AS tr_index\n>> >\n>> > FROM table a\n>> >\n>> > WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n>> >\n>> > )\n>> >\n>> >\n>> >\n>> >\n>> >\n>> > SELECT\n>> >\n>> > a.id,\n>> >\n>> > a.mmsi,\n>> >\n>> > a.name,\n>> >\n>> > a.epoch,\n>> >\n>> > a.the_geom,\n>> >\n>> > 1 + sum(a.tr_index) OVER w AS transit,\n>> >\n>> > a.active\n>> >\n>> > FROM cte_01 a\n>> >\n>> > WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n>> >\n>> >\n>> >\n>> > --\n>> > Jeff\n>\n>\n>\n>\n> --\n> Jeffrey D. Adams\n> Contractor\n> OAI, Inc.\n> In support of:\n> National Marine Fisheries Service\n> Office of Protected Resources\n> 1315 East West Hwy, Building SSMC3\n> Silver Spring, MD 20910-3282\n> phone: (301) 427-8434\n> fax: (301) 713-0376\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 11 Mar 2013 16:34:17 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large Table - Slow Window Functions (Better Approach?)"
},
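One way to flesh the sketch above out into something runnable. Table and column types here are guesses (vessel_position for the anonymised table, integer mmsi, text name, bigint epoch), and the_geom is left out only to keep the example short:

    CREATE OR REPLACE FUNCTION assign_transits()
    RETURNS TABLE (id integer, mmsi integer, name text, epoch bigint, transit integer)
    LANGUAGE plpgsql AS
    $$
    DECLARE
        r           RECORD;
        prev_mmsi   integer;
        prev_name   text;
        prev_epoch  bigint;
        tr          integer;
    BEGIN
        FOR r IN
            SELECT a.id, a.mmsi, a.name, a.epoch
            FROM vessel_position a
            ORDER BY a.mmsi, a.name, a.epoch
        LOOP
            IF prev_mmsi IS DISTINCT FROM r.mmsi
               OR prev_name IS DISTINCT FROM r.name THEN
                tr := 1;                               -- first fix for a new vessel
            ELSIF (r.epoch - prev_epoch) / 60 > 120 THEN
                tr := tr + 1;                          -- gap of more than two hours
            END IF;
            id := r.id; mmsi := r.mmsi; name := r.name;
            epoch := r.epoch; transit := tr;
            RETURN NEXT;
            prev_mmsi := r.mmsi; prev_name := r.name; prev_epoch := r.epoch;
        END LOOP;
    END;
    $$;

    -- e.g. CREATE TABLE transits AS SELECT * FROM assign_transits();

This only sorts the data once instead of twice, but the single big sort of ~90 million rows is still there, so it is not a silver bullet on its own.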
{
"msg_contents": "Thanks again. The sorting does appear to be the issue. I will test out your\ncursor idea...\n\nOn Mon, Mar 11, 2013 at 11:34 AM, Pavel Stehule <[email protected]>wrote:\n\n> 2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>:\n> > Pavel,\n> >\n> > Thanks for the response. I have not yet had the opportunity to use\n> cursors,\n> > but am now curious. Could you perhaps provide a bit more detail as to\n> what\n> > the implementation of your suggested approach would look like?\n>\n> an example:\n>\n> $$\n> DECLARE\n> r record;\n> prev_r record;\n>\n> BEGIN\n> FOR r IN SELECT * FROM a ORDER BY epoch, mmsi\n> LOOP\n> IF prev_r IS NOT NULL THEN\n> /* do some counting */\n> prev_r contains previous row, r contains current row\n> do some\n> RETURN NEXT .. /* return data in defined order */\n> END IF;\n> prev_r = r;\n> END LOOP;\n>\n>\n> Probably slow part of your query is sorting - first can be accelerated\n> by index, but second (as CTE result cannot) - you can try increase\n> work_mem ??\n>\n> Regards\n>\n> Pavel\n>\n> >\n> >\n> > On Mon, Mar 11, 2013 at 11:03 AM, Pavel Stehule <[email protected]\n> >\n> > wrote:\n> >>\n> >> Hello\n> >>\n> >> you can try procedural solution - use a cursor over ordered data in\n> >> plpgsql and returns table\n> >>\n> >> Regards\n> >>\n> >> Pavel Stehule\n> >>\n> >> 2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>:\n> >> > Greetings,\n> >> >\n> >> >\n> >> >\n> >> > I have a large table (~90 million rows) containing vessel positions.\n> In\n> >> > addition to a column that contains the location information\n> (the_geom),\n> >> > the\n> >> > table also contains two columns that are used to uniquely identify the\n> >> > vessel (mmsi and name) and a column containing the Unix time (epoch)\n> at\n> >> > which the position information was logged. I frequently need to assign\n> >> > records to vessel transits. To do this, I currently create a CTE that\n> >> > uses a\n> >> > Window function (partitioning the data by mmsi and name ordered by\n> >> > epoch) to\n> >> > examine the time that has elapsed between successive position reports\n> >> > for\n> >> > individual vessels. For every position record for a vessel (as\n> >> > identified\n> >> > using mmsi and name), if the time elapsed between the current position\n> >> > record and the previous record (using the lag function) is less than\n> or\n> >> > equal to 2 hours, I assign the record a value of 0 to a CTE column\n> named\n> >> > tr_index. If the time elapsed is greater than 2 hours, I assign the\n> >> > record a\n> >> > value of 1 to the tr_index column. I then use the CTE to generate\n> >> > transit\n> >> > numbers by summing the values in the tr_index field across a Window\n> that\n> >> > also partitions the data by mmsi and name and is ordered by epoch.\n> This\n> >> > works, but is very slow (hours). The table is indexed (multi-column\n> >> > index on\n> >> > mmsi, name and index on epoch). Does anyone see a way to get what I am\n> >> > after\n> >> > in a more efficient manner. What I am after is an assignment of\n> transit\n> >> > number to vessels' position records based on whether the records were\n> >> > within\n> >> > two hours of each other. The SQL that I used is provided below. 
Any\n> >> > advice\n> >> > would be greatly appreciated...\n> >> >\n> >> >\n> >> >\n> >> > WITH\n> >> >\n> >> > cte_01 AS\n> >> >\n> >> > (\n> >> >\n> >> > SELECT\n> >> >\n> >> > a.id,\n> >> >\n> >> > a.mmsi,\n> >> >\n> >> > a.name,\n> >> >\n> >> > a.epoch,\n> >> >\n> >> > a.the_geom\n> >> >\n> >> > CASE\n> >> >\n> >> > WHEN ((a.epoch - lag(a.epoch) OVER w) / 60) > 120 THEN 1\n> >> >\n> >> > ELSE 0\n> >> >\n> >> > END AS tr_index\n> >> >\n> >> > FROM table a\n> >> >\n> >> > WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n> >> >\n> >> > )\n> >> >\n> >> >\n> >> >\n> >> >\n> >> >\n> >> > SELECT\n> >> >\n> >> > a.id,\n> >> >\n> >> > a.mmsi,\n> >> >\n> >> > a.name,\n> >> >\n> >> > a.epoch,\n> >> >\n> >> > a.the_geom,\n> >> >\n> >> > 1 + sum(a.tr_index) OVER w AS transit,\n> >> >\n> >> > a.active\n> >> >\n> >> > FROM cte_01 a\n> >> >\n> >> > WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n> >> >\n> >> >\n> >> >\n> >> > --\n> >> > Jeff\n>\n\nThanks again. The sorting does appear to be the issue. I will test out your cursor idea...On Mon, Mar 11, 2013 at 11:34 AM, Pavel Stehule <[email protected]> wrote:\n2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>:\n> Pavel,\n>\n> Thanks for the response. I have not yet had the opportunity to use cursors,\n> but am now curious. Could you perhaps provide a bit more detail as to what\n> the implementation of your suggested approach would look like?\n\nan example:\n\n$$\nDECLARE\n r record;\n prev_r record;\n\nBEGIN\n FOR r IN SELECT * FROM a ORDER BY epoch, mmsi\n LOOP\n IF prev_r IS NOT NULL THEN\n /* do some counting */\n prev_r contains previous row, r contains current row\n do some\n RETURN NEXT .. /* return data in defined order */\n END IF;\n prev_r = r;\n END LOOP;\n\n\nProbably slow part of your query is sorting - first can be accelerated\nby index, but second (as CTE result cannot) - you can try increase\nwork_mem ??\n\nRegards\n\nPavel\n\n>\n>\n> On Mon, Mar 11, 2013 at 11:03 AM, Pavel Stehule <[email protected]>\n> wrote:\n>>\n>> Hello\n>>\n>> you can try procedural solution - use a cursor over ordered data in\n>> plpgsql and returns table\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>> 2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>:\n>> > Greetings,\n>> >\n>> >\n>> >\n>> > I have a large table (~90 million rows) containing vessel positions. In\n>> > addition to a column that contains the location information (the_geom),\n>> > the\n>> > table also contains two columns that are used to uniquely identify the\n>> > vessel (mmsi and name) and a column containing the Unix time (epoch) at\n>> > which the position information was logged. I frequently need to assign\n>> > records to vessel transits. To do this, I currently create a CTE that\n>> > uses a\n>> > Window function (partitioning the data by mmsi and name ordered by\n>> > epoch) to\n>> > examine the time that has elapsed between successive position reports\n>> > for\n>> > individual vessels. For every position record for a vessel (as\n>> > identified\n>> > using mmsi and name), if the time elapsed between the current position\n>> > record and the previous record (using the lag function) is less than or\n>> > equal to 2 hours, I assign the record a value of 0 to a CTE column named\n>> > tr_index. If the time elapsed is greater than 2 hours, I assign the\n>> > record a\n>> > value of 1 to the tr_index column. 
I then use the CTE to generate\n>> > transit\n>> > numbers by summing the values in the tr_index field across a Window that\n>> > also partitions the data by mmsi and name and is ordered by epoch. This\n>> > works, but is very slow (hours). The table is indexed (multi-column\n>> > index on\n>> > mmsi, name and index on epoch). Does anyone see a way to get what I am\n>> > after\n>> > in a more efficient manner. What I am after is an assignment of transit\n>> > number to vessels' position records based on whether the records were\n>> > within\n>> > two hours of each other. The SQL that I used is provided below. Any\n>> > advice\n>> > would be greatly appreciated...\n>> >\n>> >\n>> >\n>> > WITH\n>> >\n>> > cte_01 AS\n>> >\n>> > (\n>> >\n>> > SELECT\n>> >\n>> > a.id,\n>> >\n>> > a.mmsi,\n>> >\n>> > a.name,\n>> >\n>> > a.epoch,\n>> >\n>> > a.the_geom\n>> >\n>> > CASE\n>> >\n>> > WHEN ((a.epoch - lag(a.epoch) OVER w) / 60) > 120 THEN 1\n>> >\n>> > ELSE 0\n>> >\n>> > END AS tr_index\n>> >\n>> > FROM table a\n>> >\n>> > WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n>> >\n>> > )\n>> >\n>> >\n>> >\n>> >\n>> >\n>> > SELECT\n>> >\n>> > a.id,\n>> >\n>> > a.mmsi,\n>> >\n>> > a.name,\n>> >\n>> > a.epoch,\n>> >\n>> > a.the_geom,\n>> >\n>> > 1 + sum(a.tr_index) OVER w AS transit,\n>> >\n>> > a.active\n>> >\n>> > FROM cte_01 a\n>> >\n>> > WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n>> >\n>> >\n>> >\n>> > --\n>> > Jeff",
"msg_date": "Mon, 11 Mar 2013 11:48:17 -0400",
"msg_from": "Jeff Adams - NOAA Affiliate <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Large Table - Slow Window Functions (Better Approach?)"
},
{
"msg_contents": "So, I tested out the cursor approach, and it still chugs along for hours.\nIf the result set is large (and the available memory to process small),\ndoes it matter what goes on within the cursor. Will it still choke trying\nassemble and spit out the large result set?\n\nOn Mon, Mar 11, 2013 at 11:48 AM, Jeff Adams - NOAA Affiliate <\[email protected]> wrote:\n\n> Thanks again. The sorting does appear to be the issue. I will test out\n> your cursor idea...\n>\n>\n> On Mon, Mar 11, 2013 at 11:34 AM, Pavel Stehule <[email protected]>wrote:\n>\n>> 2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>:\n>> > Pavel,\n>> >\n>> > Thanks for the response. I have not yet had the opportunity to use\n>> cursors,\n>> > but am now curious. Could you perhaps provide a bit more detail as to\n>> what\n>> > the implementation of your suggested approach would look like?\n>>\n>> an example:\n>>\n>> $$\n>> DECLARE\n>> r record;\n>> prev_r record;\n>>\n>> BEGIN\n>> FOR r IN SELECT * FROM a ORDER BY epoch, mmsi\n>> LOOP\n>> IF prev_r IS NOT NULL THEN\n>> /* do some counting */\n>> prev_r contains previous row, r contains current row\n>> do some\n>> RETURN NEXT .. /* return data in defined order */\n>> END IF;\n>> prev_r = r;\n>> END LOOP;\n>>\n>>\n>> Probably slow part of your query is sorting - first can be accelerated\n>> by index, but second (as CTE result cannot) - you can try increase\n>> work_mem ??\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>> >\n>> >\n>> > On Mon, Mar 11, 2013 at 11:03 AM, Pavel Stehule <\n>> [email protected]>\n>> > wrote:\n>> >>\n>> >> Hello\n>> >>\n>> >> you can try procedural solution - use a cursor over ordered data in\n>> >> plpgsql and returns table\n>> >>\n>> >> Regards\n>> >>\n>> >> Pavel Stehule\n>> >>\n>> >> 2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>:\n>> >> > Greetings,\n>> >> >\n>> >> >\n>> >> >\n>> >> > I have a large table (~90 million rows) containing vessel positions.\n>> In\n>> >> > addition to a column that contains the location information\n>> (the_geom),\n>> >> > the\n>> >> > table also contains two columns that are used to uniquely identify\n>> the\n>> >> > vessel (mmsi and name) and a column containing the Unix time (epoch)\n>> at\n>> >> > which the position information was logged. I frequently need to\n>> assign\n>> >> > records to vessel transits. To do this, I currently create a CTE that\n>> >> > uses a\n>> >> > Window function (partitioning the data by mmsi and name ordered by\n>> >> > epoch) to\n>> >> > examine the time that has elapsed between successive position reports\n>> >> > for\n>> >> > individual vessels. For every position record for a vessel (as\n>> >> > identified\n>> >> > using mmsi and name), if the time elapsed between the current\n>> position\n>> >> > record and the previous record (using the lag function) is less than\n>> or\n>> >> > equal to 2 hours, I assign the record a value of 0 to a CTE column\n>> named\n>> >> > tr_index. If the time elapsed is greater than 2 hours, I assign the\n>> >> > record a\n>> >> > value of 1 to the tr_index column. I then use the CTE to generate\n>> >> > transit\n>> >> > numbers by summing the values in the tr_index field across a Window\n>> that\n>> >> > also partitions the data by mmsi and name and is ordered by epoch.\n>> This\n>> >> > works, but is very slow (hours). The table is indexed (multi-column\n>> >> > index on\n>> >> > mmsi, name and index on epoch). Does anyone see a way to get what I\n>> am\n>> >> > after\n>> >> > in a more efficient manner. 
What I am after is an assignment of\n>> transit\n>> >> > number to vessels' position records based on whether the records were\n>> >> > within\n>> >> > two hours of each other. The SQL that I used is provided below. Any\n>> >> > advice\n>> >> > would be greatly appreciated...\n>> >> >\n>> >> >\n>> >> >\n>> >> > WITH\n>> >> >\n>> >> > cte_01 AS\n>> >> >\n>> >> > (\n>> >> >\n>> >> > SELECT\n>> >> >\n>> >> > a.id,\n>> >> >\n>> >> > a.mmsi,\n>> >> >\n>> >> > a.name,\n>> >> >\n>> >> > a.epoch,\n>> >> >\n>> >> > a.the_geom\n>> >> >\n>> >> > CASE\n>> >> >\n>> >> > WHEN ((a.epoch - lag(a.epoch) OVER w) / 60) > 120 THEN 1\n>> >> >\n>> >> > ELSE 0\n>> >> >\n>> >> > END AS tr_index\n>> >> >\n>> >> > FROM table a\n>> >> >\n>> >> > WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n>> >> >\n>> >> > )\n>> >> >\n>> >> >\n>> >> >\n>> >> >\n>> >> >\n>> >> > SELECT\n>> >> >\n>> >> > a.id,\n>> >> >\n>> >> > a.mmsi,\n>> >> >\n>> >> > a.name,\n>> >> >\n>> >> > a.epoch,\n>> >> >\n>> >> > a.the_geom,\n>> >> >\n>> >> > 1 + sum(a.tr_index) OVER w AS transit,\n>> >> >\n>> >> > a.active\n>> >> >\n>> >> > FROM cte_01 a\n>> >> >\n>> >> > WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n>> >> >\n>> >> >\n>> >> >\n>> >> > --\n>> >> > Jeff\n>>\n>\n\n\n-- \nJeffrey D. Adams\nContractor\nOAI, Inc.\nIn support of:\nNational Marine Fisheries Service\nOffice of Protected Resources\n1315 East West Hwy, Building SSMC3\nSilver Spring, MD 20910-3282\nphone: (301) 427-8434\nfax: (301) 713-0376\n\nSo, I tested out the cursor approach, and it still chugs along for hours. If the result set is large (and the available memory to process small), does it matter what goes on within the cursor. Will it still choke trying assemble and spit out the large result set?\nOn Mon, Mar 11, 2013 at 11:48 AM, Jeff Adams - NOAA Affiliate <[email protected]> wrote:\nThanks again. The sorting does appear to be the issue. I will test out your cursor idea...On Mon, Mar 11, 2013 at 11:34 AM, Pavel Stehule <[email protected]> wrote:\n2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>:\n> Pavel,\n>\n> Thanks for the response. I have not yet had the opportunity to use cursors,\n> but am now curious. Could you perhaps provide a bit more detail as to what\n> the implementation of your suggested approach would look like?\n\nan example:\n\n$$\nDECLARE\n r record;\n prev_r record;\n\nBEGIN\n FOR r IN SELECT * FROM a ORDER BY epoch, mmsi\n LOOP\n IF prev_r IS NOT NULL THEN\n /* do some counting */\n prev_r contains previous row, r contains current row\n do some\n RETURN NEXT .. /* return data in defined order */\n END IF;\n prev_r = r;\n END LOOP;\n\n\nProbably slow part of your query is sorting - first can be accelerated\nby index, but second (as CTE result cannot) - you can try increase\nwork_mem ??\n\nRegards\n\nPavel\n\n>\n>\n> On Mon, Mar 11, 2013 at 11:03 AM, Pavel Stehule <[email protected]>\n> wrote:\n>>\n>> Hello\n>>\n>> you can try procedural solution - use a cursor over ordered data in\n>> plpgsql and returns table\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>> 2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>:\n>> > Greetings,\n>> >\n>> >\n>> >\n>> > I have a large table (~90 million rows) containing vessel positions. In\n>> > addition to a column that contains the location information (the_geom),\n>> > the\n>> > table also contains two columns that are used to uniquely identify the\n>> > vessel (mmsi and name) and a column containing the Unix time (epoch) at\n>> > which the position information was logged. 
I frequently need to assign\n>> > records to vessel transits. To do this, I currently create a CTE that\n>> > uses a\n>> > Window function (partitioning the data by mmsi and name ordered by\n>> > epoch) to\n>> > examine the time that has elapsed between successive position reports\n>> > for\n>> > individual vessels. For every position record for a vessel (as\n>> > identified\n>> > using mmsi and name), if the time elapsed between the current position\n>> > record and the previous record (using the lag function) is less than or\n>> > equal to 2 hours, I assign the record a value of 0 to a CTE column named\n>> > tr_index. If the time elapsed is greater than 2 hours, I assign the\n>> > record a\n>> > value of 1 to the tr_index column. I then use the CTE to generate\n>> > transit\n>> > numbers by summing the values in the tr_index field across a Window that\n>> > also partitions the data by mmsi and name and is ordered by epoch. This\n>> > works, but is very slow (hours). The table is indexed (multi-column\n>> > index on\n>> > mmsi, name and index on epoch). Does anyone see a way to get what I am\n>> > after\n>> > in a more efficient manner. What I am after is an assignment of transit\n>> > number to vessels' position records based on whether the records were\n>> > within\n>> > two hours of each other. The SQL that I used is provided below. Any\n>> > advice\n>> > would be greatly appreciated...\n>> >\n>> >\n>> >\n>> > WITH\n>> >\n>> > cte_01 AS\n>> >\n>> > (\n>> >\n>> > SELECT\n>> >\n>> > a.id,\n>> >\n>> > a.mmsi,\n>> >\n>> > a.name,\n>> >\n>> > a.epoch,\n>> >\n>> > a.the_geom\n>> >\n>> > CASE\n>> >\n>> > WHEN ((a.epoch - lag(a.epoch) OVER w) / 60) > 120 THEN 1\n>> >\n>> > ELSE 0\n>> >\n>> > END AS tr_index\n>> >\n>> > FROM table a\n>> >\n>> > WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n>> >\n>> > )\n>> >\n>> >\n>> >\n>> >\n>> >\n>> > SELECT\n>> >\n>> > a.id,\n>> >\n>> > a.mmsi,\n>> >\n>> > a.name,\n>> >\n>> > a.epoch,\n>> >\n>> > a.the_geom,\n>> >\n>> > 1 + sum(a.tr_index) OVER w AS transit,\n>> >\n>> > a.active\n>> >\n>> > FROM cte_01 a\n>> >\n>> > WINDOW w AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)\n>> >\n>> >\n>> >\n>> > --\n>> > Jeff\n-- Jeffrey D. AdamsContractorOAI, Inc.In support of:National Marine Fisheries ServiceOffice of Protected Resources1315 East West Hwy, Building SSMC3\nSilver Spring, MD 20910-3282phone: (301) 427-8434fax: (301) 713-0376",
"msg_date": "Tue, 12 Mar 2013 09:08:31 -0400",
"msg_from": "Jeff Adams - NOAA Affiliate <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Large Table - Slow Window Functions (Better Approach?)"
},
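A minimal sketch of the single-pass approach Pavel describes, filling in his pseudocode. The table name (positions) and the column types are assumptions, since the original post only shows "FROM table a"; the 7200-second gap matches the 2-hour threshold in the posted query. Scalar prev_* variables are used instead of a record so the first row needs no special casing.

CREATE OR REPLACE FUNCTION assign_transits()
RETURNS TABLE (id bigint, mmsi bigint, name text, epoch bigint, transit integer)
LANGUAGE plpgsql AS
$$
DECLARE
    r          record;
    prev_mmsi  bigint;
    prev_name  text;
    prev_epoch bigint;
    cur_tr     integer;
BEGIN
    FOR r IN
        SELECT a.id, a.mmsi, a.name, a.epoch
          FROM positions a                      -- stand-in for the real table name
         ORDER BY a.mmsi, a.name, a.epoch       -- one ordered scan, vessel by vessel
    LOOP
        IF r.mmsi IS DISTINCT FROM prev_mmsi
           OR r.name IS DISTINCT FROM prev_name THEN
            cur_tr := 1;                        -- first transit of a new vessel
        ELSIF r.epoch - prev_epoch > 7200 THEN  -- gap of more than 2 hours
            cur_tr := cur_tr + 1;               -- start a new transit
        END IF;
        id      := r.id;
        mmsi    := r.mmsi;
        name    := r.name;
        epoch   := r.epoch;
        transit := cur_tr;
        RETURN NEXT;
        prev_mmsi  := r.mmsi;
        prev_name  := r.name;
        prev_epoch := r.epoch;
    END LOOP;
END;
$$;

One thing worth knowing before benchmarking it: RETURN NEXT in plpgsql accumulates the whole result in a tuplestore (spilling to disk past work_mem) before anything is returned, so a 90-million-row result is still assembled in full, which speaks directly to the question above about whether the cursor version will still choke on output.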
{
"msg_contents": "2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>\n\n> Greetings,\n>\n>\n>\n> I have a large table (~90 million rows) containing vessel positions.\n>\n>\n> ...\n>\n>\n>\nCould you kindly provide a script to create the table and populate it with\nseveral sample\nrows, please? Also, provide the desired output for the sample rows.\n\nIt would be good to take a look on the “EXPLAIN ANALYZE” output of your\nquery,\nplease, share http://explain.depesz.com/ link.\n\n\n-- \nVictor Y. Yegorov\n\n2013/3/11 Jeff Adams - NOAA Affiliate <[email protected]>\nGreetings,\n \nI have a large table (~90 million rows) containing vessel\npositions.\n ...\n\nCould you kindly provide a script to create the table and populate it with several samplerows, please? Also, provide the desired output for the sample rows.\nIt would be good to take a look on the “EXPLAIN ANALYZE” output of your query,please, share http://explain.depesz.com/ link.\n-- Victor Y. Yegorov",
"msg_date": "Tue, 12 Mar 2013 15:36:54 +0200",
"msg_from": "Victor Yegorov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large Table - Slow Window Functions (Better Approach?)"
}
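For reference, a cleaned-up version of the posted query (the original appears to be missing a comma after a.the_geom) together with an index matching the window's PARTITION BY / ORDER BY, so the first sort can come from the index as Pavel suggested. The table name positions is a stand-in for the real one and epoch is assumed to be Unix seconds; wrapping the statement in EXPLAIN (ANALYZE, BUFFERS) would produce the plan Victor asks for.

CREATE INDEX positions_vessel_epoch_idx ON positions (mmsi, name, epoch);

SELECT s.id, s.mmsi, s.name, s.epoch, s.the_geom,
       1 + sum(s.tr_index) OVER w AS transit
  FROM (
        SELECT a.id, a.mmsi, a.name, a.epoch, a.the_geom,
               CASE
                   WHEN a.epoch - lag(a.epoch) OVER w2 > 7200 THEN 1
                   ELSE 0
               END AS tr_index
          FROM positions a
        WINDOW w2 AS (PARTITION BY a.mmsi, a.name ORDER BY a.epoch)
       ) s
WINDOW w AS (PARTITION BY s.mmsi, s.name ORDER BY s.epoch);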
] |
[
{
"msg_contents": "I have a very heavy OLTP application in the field.\n\nWe have two PostgreSQL databases (9.0.x)/FreeBSD 8.1/amd64 - one is a\nlistener which just stores data digests, the other is the actual\ndatabase.\n\nThe digests from the listener are processed by multi-threaded daemons\nand inserted into the main database.\n\nEvery so often, we see the queues on the listener grow and get\nsignificantly behind.\n\nChecking the system metrics, we do not see any issue with either memory,\nCPU, or IO (using top and iostat), so it would appear we are being hit\nby contention. When data insertion rates slow down we can see a lot of\nthe potgresql processes in a semwait state.\n\n \n\nThe data which is coming in from the daemons is inserted into a\ntemporary table, which fires of a trigger which in turn calls a stored\nprocedure (massive) which processes the data input.\n\nWe also have another daemon which runs in the background continuously\ncreating a materialized view of a dashboard. If this daemon does not\nrun, the queues do not grow (or the rate of growth decreases if some\nother heavy processing is going on such as a large report being\ngenerated), so it appears this is one of the primary causes for the\ncontention which is causing the semwaits.\n\n \n\nThe data is inserted into 3 partitioned tables, each of which is fairly\nwide. The daemon which processes the dashboard looks at only 2 fields\n(state and last value), using the Devices and Tests tables. If I were\nto create a 2 more partitioned tables which only holds the columns in\nquestion (as well as the columns necessary to associate the rows),\nshould this reduce the contention? \n\n \n\nThe partitioned tables are as such:\n\n \n\nDevices -> Tests -> Statistical info on tests\n\n \n\nI was thinking of adding two more partitioned tables\n\n \n\nAltdevices -> DashboardTests\n\n \n\nThat way, when we process the dashboard we do not touch any of the 3\nprimary tables, which are the ones which are constantly being pounded\non.\n\nLooking at the stats on the server (Xact committed), we are processing\napproximately 4000 transactions per second.\n\n \n\nOn another note, this setup is using streaming replication to a\nsecondary server which is used as a read only server for reporting.\nWould accessing data from the secondary server somehow cause contention\non the primary server? From the patterns of behavior, it would appear\nso (when large reports are generated we are seeing some effect on the\ndata insertion rates).\n\n \n\nThanks in advance,\n\n \n\nBenjamin\n\n \n\n\nI have a very heavy OLTP application in the field.We have two PostgreSQL databases (9.0.x)/FreeBSD 8.1/amd64 – one is a listener which just stores data digests, the other is the actual database.The digests from the listener are processed by multi-threaded daemons and inserted into the main database.Every so often, we see the queues on the listener grow and get significantly behind.Checking the system metrics, we do not see any issue with either memory, CPU, or IO (using top and iostat), so it would appear we are being hit by contention. When data insertion rates slow down we can see a lot of the potgresql processes in a semwait state. The data which is coming in from the daemons is inserted into a temporary table, which fires of a trigger which in turn calls a stored procedure (massive) which processes the data input.We also have another daemon which runs in the background continuously creating a materialized view of a dashboard. 
If this daemon does not run, the queues do not grow (or the rate of growth decreases if some other heavy processing is going on such as a large report being generated), so it appears this is one of the primary causes for the contention which is causing the semwaits. The data is inserted into 3 partitioned tables, each of which is fairly wide. The daemon which processes the dashboard looks at only 2 fields (state and last value), using the Devices and Tests tables. If I were to create a 2 more partitioned tables which only holds the columns in question (as well as the columns necessary to associate the rows), should this reduce the contention? The partitioned tables are as such: Devices -> Tests -> Statistical info on tests I was thinking of adding two more partitioned tables Altdevices -> DashboardTests That way, when we process the dashboard we do not touch any of the 3 primary tables, which are the ones which are constantly being pounded on.Looking at the stats on the server (Xact committed), we are processing approximately 4000 transactions per second. On another note, this setup is using streaming replication to a secondary server which is used as a read only server for reporting. Would accessing data from the secondary server somehow cause contention on the primary server? From the patterns of behavior, it would appear so (when large reports are generated we are seeing some effect on the data insertion rates). Thanks in advance, Benjamin",
"msg_date": "Mon, 11 Mar 2013 09:53:33 -0600",
"msg_from": "\"Benjamin Krajmalnik\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "The dreaded semwait on FreeBSD"
}
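A rough sketch of the narrow "dashboard" table idea, since the schema itself was not posted: every name and column here (tests, test_dashboard, state, last_value) is a guess standing in for the real ones. The slim table holds only the two fields the dashboard daemon reads, so that daemon never has to scan the wide partitioned tables; the write side could equally be folded into the existing stored procedure instead of a separate trigger if adding a trigger to the hot insert path costs too much.

CREATE TABLE test_dashboard (
    test_id     bigint PRIMARY KEY,
    device_id   bigint NOT NULL,
    state       integer,
    last_value  double precision,
    updated_at  timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION maintain_test_dashboard() RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
    UPDATE test_dashboard
       SET state = NEW.state,
           last_value = NEW.last_value,
           updated_at = now()
     WHERE test_id = NEW.test_id;
    IF NOT FOUND THEN
        INSERT INTO test_dashboard (test_id, device_id, state, last_value)
        VALUES (NEW.test_id, NEW.device_id, NEW.state, NEW.last_value);
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER test_dashboard_upd
AFTER INSERT OR UPDATE ON tests
FOR EACH ROW EXECUTE PROCEDURE maintain_test_dashboard();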
] |
[
{
"msg_contents": "Hey everyone!\n\nA developer was complaining about a view he created to abstract an added \ncolumn in a left join. He was contemplating denormalizing the added \nvalue into the parent table and using a trigger to maintain it instead, \nand I obviously looked into the problem. I noticed the view was \nincurring a sequence scan on an obvious index condition, but the regular \njoin version was not.\n\nCurious, I whipped up this test case:\n\nCREATE TABLE foo (id BIGINT, small_label VARCHAR);\nINSERT INTO foo (id) VALUES (generate_series(1, 10000));\nALTER TABLE foo ADD CONSTRAINT pk_foo_id PRIMARY KEY (id);\n\nCREATE TABLE bar (id BIGINT, foo_id BIGINT);\n\nINSERT INTO bar (id, foo_id)\nSELECT a, a%10000\n FROM generate_series(1, 100000) a;\n\nALTER TABLE bar ADD CONSTRAINT pk_bar_id PRIMARY KEY (id);\n\nCREATE TABLE tiny_foo (small_label VARCHAR NOT NULL PRIMARY KEY);\nINSERT INTO tiny_foo (small_label)\nVALUES (('yes', 'we', 'have', 'no', 'bananas'));\n\nUPDATE foo SET small_label = 'bananas' WHERE id=750;\n\nANALYZE foo;\nANALYZE bar;\nANALYZE tiny_foo;\n\nCREATE VIEW v_slow_view AS\nSELECT foo.*, tf.small_label IS NOT NULL AS has_small_label\n FROM foo\n LEFT JOIN tiny_foo tf USING (small_label);\n\n\nNow, this is with PostgreSQL 9.1.8, basically default everything in a \nbase Ubuntu install. So, the good query plan using all tables directly:\n\nSELECT bar.*, foo.*, tf.small_label IS NOT NULL AS has_small_label\n FROM bar\n LEFT JOIN foo ON (foo.id = bar.foo_id)\n LEFT JOIN tiny_foo tf USING (small_label)\n WHERE bar.id IN (750, 1750, 2750)\n ORDER BY bar.id;\n\ndoes this:\n\nIndex Scan using pk_foo_id on foo (cost=0.00..8.27 rows=1 width=16)\n Index Cond: (id = bar.foo_id)\n\nThe bad one using the view:\n\nSELECT bar.*, sv.*\n FROM bar\n LEFT JOIN v_slow_view sv ON (sv.id = bar.foo_id)\n WHERE bar.id IN (750, 1750, 2750)\n ORDER BY bar.id;\n\nMysteriously, does this:\n\nSeq Scan on foo (cost=0.00..145.00 rows=10000 width=16)\n\nI'm... perplexed. This test case is way too shallow to be affected by \njoin_collapse_limit and its ilk, so I'm not sure what's going on here. I \nsense an optimization fence, but I can't see where.\n\nThanks in advance!\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 11 Mar 2013 17:29:13 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query when used in a view"
},
{
"msg_contents": "Shaun Thomas <[email protected]> writes:\n> A developer was complaining about a view he created to abstract an added \n> column in a left join. ...\n> Curious, I whipped up this test case:\n\n> CREATE VIEW v_slow_view AS\n> SELECT foo.*, tf.small_label IS NOT NULL AS has_small_label\n> FROM foo\n> LEFT JOIN tiny_foo tf USING (small_label);\n\n> SELECT bar.*, foo.*, tf.small_label IS NOT NULL AS has_small_label\n> FROM bar\n> LEFT JOIN foo ON (foo.id = bar.foo_id)\n> LEFT JOIN tiny_foo tf USING (small_label)\n> WHERE bar.id IN (750, 1750, 2750)\n> ORDER BY bar.id;\n\n> SELECT bar.*, sv.*\n> FROM bar\n> LEFT JOIN v_slow_view sv ON (sv.id = bar.foo_id)\n> WHERE bar.id IN (750, 1750, 2750)\n> ORDER BY bar.id;\n\nThese queries are not actually equivalent. In the first one, it is\nimpossible for \"has_small_label\" to read out as NULL: it will either be\ntrue or false. However, in the second one, the IS NOT NULL is evaluated\nbelow the LEFT JOIN to \"sv\", and therefore it is required that the query\nreturn NULL for \"has_small_label\" in any row where bar.foo_id lacks a\njoin partner.\n\nTo implement that behavior correctly, we're forced to form the\nfoo-to-tiny_foo join first, then do the left join with bar (which'll\nreplace RHS columns by nulls where necessary).\n\nAnd that means that you get the inefficient plan wherein the\nfoo-to-tiny_foo join is computed in its entirety.\n\n9.2 does this case better, by virtue of the \"parameterized plan\" stuff,\nwhich exists specifically to let us use nestloop-with-inner-indexscan\nplans even when there are some join order restrictions complicating\nmatters.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 11 Mar 2013 19:56:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query when used in a view"
},
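Tom's point can be seen directly with the test data from the original post: bar.id = 10000 gets foo_id = 0, which has no partner in foo, and that one row is where the two forms part ways.

-- Flattened form: the IS NOT NULL sits above the outer join to foo,
-- so the unmatched row comes out false.
SELECT bar.id, tf.small_label IS NOT NULL AS has_small_label
  FROM bar
  LEFT JOIN foo ON foo.id = bar.foo_id
  LEFT JOIN tiny_foo tf USING (small_label)
 WHERE bar.id = 10000;
-- expected: 10000 | f

-- View form: the expression is evaluated inside the view, below the outer
-- join to bar, so the same row has to come out NULL -- and to honour that,
-- the planner must form the whole foo-to-tiny_foo join first.
SELECT bar.id, sv.has_small_label
  FROM bar
  LEFT JOIN v_slow_view sv ON sv.id = bar.foo_id
 WHERE bar.id = 10000;
-- expected: 10000 | NULL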
{
"msg_contents": "On 03/11/2013 06:56 PM, Tom Lane wrote:\n\n> And that means that you get the inefficient plan wherein the\n> foo-to-tiny_foo join is computed in its entirety.\n\n:(\n\nThat's unfortunate, though I guess it makes sense. I moved the join in \nthe view into the SELECT clause as an EXISTS, and that seems to work \nwithout major adverse effects. Apparently the tiny table really will be \ntiny in actual use, so impact should be minimal.\n\nI just really don't like using subselects that way. :)\n\nThanks, Tom!\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 12 Mar 2013 08:49:04 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query when used in a view"
}
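For anyone hitting the same wall on pre-9.2, the EXISTS rewrite mentioned above might look roughly like this; it keeps the same output (including the NULL for unmatched bar rows) while letting the planner push bar.foo_id down to an index scan on foo, with the tiny_foo probe run only for the rows that survive. This assumes tiny_foo keeps its primary key on small_label so the subplan is an index lookup.

CREATE OR REPLACE VIEW v_slow_view AS
SELECT foo.*,
       EXISTS (SELECT 1
                 FROM tiny_foo tf
                WHERE tf.small_label = foo.small_label) AS has_small_label
  FROM foo;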
] |
[
{
"msg_contents": "Hi all,\n\nWe have one table with list of \"records for processing\"...\n\nWe loop trough that table and call one long runing function:\n\n do_the_math_for_record(record_id)\n\nwhich use different tables for select related rows for input record_id, do\nsome calculations and insert results in two tables...\n\nand we have made 1 function process_all_records()\n\nwhat simply does: SELECT do_the_math_for_record(record_id) FROM\nrecords_for_processing\n\nWhen we run that function - it last about 4 minutes...\n\n\nThere are about 300 rows in records_for_processing... we have logged the\ntime on the beginning of do_the_math, and the time in end of do the math...\nand noticed that processing each row, last between 0.5 to 2 seconds...\n\nso our do_the_math looks like:\n\nPERFORM log_time(record_id, clock_timestamp(), 1)\n\nPERFORM do_the_math_and_save_results(record_id);\n\nPERFORM log_time(record_id, clock_timestamp(), 2)\n\n\nThen we thought, if we take all \"records for processing\" and process each\nin separate connection - it should last longer...\n\nbut - got worse result! (using 30 concurrent connections...)... about 7\nmins...\n\nif we reduce concurrent connections on 10 - we got result in approx the\nsame time as sequential processing...\n\nbut - if replace do_the_math_and_save_results with pg_sleep(1); To simulate\nlong running function so processing each row - last 1 sec...\n\nSequential processing last as expected 300 seconds!\n\nConcurrent processing last faster with higher number of\nconcurrent connections - about 30 seconds with 30 connections! (much faster\n- and expected...)\n\nhowever, if we return our: do_the_math_and_save_results - we can't get\nbetter results in concurrent processing...\n\nwith higher number of conccurent connections - result is worse... also we\nhave noticed that for some records difference between end_time and\nstart_time si even longer than 1 min - but it is random - not always on the\nsame id... i.e. in this concurrent run lasts 1 min - in next 1 sec - but\nsome other takes about 1 min...\n\nAny idea - why? :)\n\nIt says to me - that there is somewhere lock on some tables - so probably\nour concurrent connections wait - to other finish... but I cant figure out:\nwhat and why...\n\ndo_the_math_and_save results - selects data from 10 other tables,\ncalculates something, and results inserts in other tables...\n\nthere are about 3 tracking tables with (record_id - other data...... and\nabout 7 settings tables what we join to tracking tables to get all\ninfo...), then do the math with that info - and insert results..\n\nwe don't do any update... 
(to have possibility two connections want to\nupdate the same row in the same table)\n\ndata from tracking_tables - should be separate sets of data for two\ndifferenet record_ids...\n\n(joined rows from settings tables could be common - for two sets of\ndifferent record_id)\n\nbut - even they are the same set - SELECTs should not lock the rows in\ntables...\n\nThere are places where we do:\n\nINSERT INTO result_table (columns)\nSELECT query (tracking and settings tables joined)\n\nIs there a chance it does some lock somewhere?\n\ncan above query be run \"concurrently\"?\n\nMany thanks,\n\nMisa\n\nHi all,We have one table with list of \"records for processing\"...We loop trough that table and call one long runing function:\n do_the_math_for_record(record_id) which use different tables for select related rows for input record_id, do some calculations and insert results in two tables...\nand we have made 1 function process_all_records()what simply does: SELECT do_the_math_for_record(record_id) FROM records_for_processing\nWhen we run that function - it last about 4 minutes...There are about 300 rows in records_for_processing... we have logged the time on the beginning of do_the_math, and the time in end of do the math... and noticed that processing each row, last between 0.5 to 2 seconds...\nso our do_the_math looks like:PERFORM log_time(record_id, clock_timestamp(), 1)PERFORM do_the_math_and_save_results(record_id);\nPERFORM log_time(record_id, clock_timestamp(), 2)Then we thought, if we take all \"records for processing\" and process each in separate connection - it should last longer...\nbut - got worse result! (using 30 concurrent connections...)... about 7 mins...if we reduce concurrent connections on 10 - we got result in approx the same time as sequential processing...\nbut - if replace do_the_math_and_save_results with pg_sleep(1); To simulate long running function so processing each row - last 1 sec...Sequential processing last as expected 300 seconds!\nConcurrent processing last faster with higher number of concurrent connections - about 30 seconds with 30 connections! (much faster - and expected...)however, if we return our: do_the_math_and_save_results - we can't get better results in concurrent processing...\nwith higher number of conccurent connections - result is worse... also we have noticed that for some records difference between end_time and start_time si even longer than 1 min - but it is random - not always on the same id... i.e. in this concurrent run lasts 1 min - in next 1 sec - but some other takes about 1 min...\nAny idea - why? :)It says to me - that there is somewhere lock on some tables - so probably our concurrent connections wait - to other finish... but I cant figure out: what and why...\ndo_the_math_and_save results - selects data from 10 other tables, calculates something, and results inserts in other tables... there are about 3 tracking tables with (record_id - other data...... and about 7 settings tables what we join to tracking tables to get all info...), then do the math with that info - and insert results..\nwe don't do any update... 
(to have possibility two connections want to update the same row in the same table)data from tracking_tables - should be separate sets of data for two differenet record_ids...\n(joined rows from settings tables could be common - for two sets of different record_id)but - even they are the same set - SELECTs should not lock the rows in tables...\nThere are places where we do:INSERT INTO result_table (columns)SELECT query (tracking and settings tables joined)\nIs there a chance it does some lock somewhere?can above query be run \"concurrently\"?Many thanks,\nMisa",
"msg_date": "Tue, 12 Mar 2013 04:55:26 +0100",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow concurrent processing"
},
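One way to confirm the "somewhere there is a lock" suspicion while the concurrent run is in progress is to look at pg_locks for ungranted lock requests and see which backend holds the conflicting lock. This sketch uses only pg_locks columns; the pid values can then be matched against pg_stat_activity (procpid / current_query on 9.1) to see what each side is executing.

SELECT waiting.pid                AS waiting_pid,
       waiting.locktype,
       waiting.relation::regclass AS relation,
       waiting.mode               AS wanted_mode,
       holder.pid                 AS holding_pid,
       holder.mode                AS held_mode
  FROM pg_locks waiting
  JOIN pg_locks holder
    ON holder.granted
   AND holder.pid <> waiting.pid
   AND (   (waiting.locktype = 'transactionid'
            AND holder.transactionid = waiting.transactionid)
        OR (waiting.locktype = 'tuple'
            AND holder.relation = waiting.relation
            AND holder.page = waiting.page
            AND holder.tuple = waiting.tuple)
        OR (waiting.locktype = 'relation'
            AND holder.database = waiting.database
            AND holder.relation = waiting.relation))
 WHERE NOT waiting.granted;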
{
"msg_contents": "Hi,\n\nResearching deeply my problem with concurrent processing i have found:\n\nhttp://postgresql.1045698.n5.nabble.com/WHY-transaction-waits-for-another-transaction-tp2142627p2142630.html\n\n\n\"The more likely suspect is a foreign key conflict.\nAre both transactions inserting/updating rows that could reference\nthe same row(s) in a master table?\" - Tom Lane\n\nThis is exactly the case (in my case) - several connections tries to insert\nrows in the same table... but some columns are referenced to settings\ntables... and there is possibility that two rows what we want to\ninsert reference the same row in settings table...\n\nAny idea how to make this process faster?\n\nmaybe to make new tables the same structure as results tables.... with no\nindexes fk etc... during processing insert into un-referenced tables -\nwhen full process finish - move rows from unreferenced to real results\ntables?\n\n\nThanks,\n\nMisa\n\n\n\n\n\n\n\n\n\n\n2013/3/12 Misa Simic <[email protected]>\n\n> Hi all,\n>\n> We have one table with list of \"records for processing\"...\n>\n> We loop trough that table and call one long runing function:\n>\n> do_the_math_for_record(record_id)\n>\n> which use different tables for select related rows for input record_id, do\n> some calculations and insert results in two tables...\n>\n> and we have made 1 function process_all_records()\n>\n> what simply does: SELECT do_the_math_for_record(record_id) FROM\n> records_for_processing\n>\n> When we run that function - it last about 4 minutes...\n>\n>\n> There are about 300 rows in records_for_processing... we have logged the\n> time on the beginning of do_the_math, and the time in end of do the math...\n> and noticed that processing each row, last between 0.5 to 2 seconds...\n>\n> so our do_the_math looks like:\n>\n> PERFORM log_time(record_id, clock_timestamp(), 1)\n>\n> PERFORM do_the_math_and_save_results(record_id);\n>\n> PERFORM log_time(record_id, clock_timestamp(), 2)\n>\n>\n> Then we thought, if we take all \"records for processing\" and process each\n> in separate connection - it should last longer...\n>\n> but - got worse result! (using 30 concurrent connections...)... about 7\n> mins...\n>\n> if we reduce concurrent connections on 10 - we got result in approx the\n> same time as sequential processing...\n>\n> but - if replace do_the_math_and_save_results with pg_sleep(1); To\n> simulate long running function so processing each row - last 1 sec...\n>\n> Sequential processing last as expected 300 seconds!\n>\n> Concurrent processing last faster with higher number of\n> concurrent connections - about 30 seconds with 30 connections! (much faster\n> - and expected...)\n>\n> however, if we return our: do_the_math_and_save_results - we can't get\n> better results in concurrent processing...\n>\n> with higher number of conccurent connections - result is worse... also we\n> have noticed that for some records difference between end_time and\n> start_time si even longer than 1 min - but it is random - not always on the\n> same id... i.e. in this concurrent run lasts 1 min - in next 1 sec - but\n> some other takes about 1 min...\n>\n> Any idea - why? :)\n>\n> It says to me - that there is somewhere lock on some tables - so probably\n> our concurrent connections wait - to other finish... 
but I cant figure out:\n> what and why...\n>\n> do_the_math_and_save results - selects data from 10 other tables,\n> calculates something, and results inserts in other tables...\n>\n> there are about 3 tracking tables with (record_id - other data...... and\n> about 7 settings tables what we join to tracking tables to get all\n> info...), then do the math with that info - and insert results..\n>\n> we don't do any update... (to have possibility two connections want to\n> update the same row in the same table)\n>\n> data from tracking_tables - should be separate sets of data for two\n> differenet record_ids...\n>\n> (joined rows from settings tables could be common - for two sets of\n> different record_id)\n>\n> but - even they are the same set - SELECTs should not lock the rows in\n> tables...\n>\n> There are places where we do:\n>\n> INSERT INTO result_table (columns)\n> SELECT query (tracking and settings tables joined)\n>\n> Is there a chance it does some lock somewhere?\n>\n> can above query be run \"concurrently\"?\n>\n> Many thanks,\n>\n> Misa\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n\nHi,Researching deeply my problem with concurrent processing i have found:http://postgresql.1045698.n5.nabble.com/WHY-transaction-waits-for-another-transaction-tp2142627p2142630.html\n\"The more likely suspect is a foreign key conflict. \nAre both transactions inserting/updating rows that could reference \nthe same row(s) in a master table?\" - Tom LaneThis is exactly the case (in my case) - several connections tries to insert rows in the same table... but some columns are referenced to settings tables... and there is possibility that two rows what we want to insert reference the same row in settings table...\nAny idea how to make this process faster?maybe to make new tables the same structure as results tables.... with no indexes fk etc... during processing insert into un-referenced tables - when full process finish - move rows from unreferenced to real results tables?\nThanks,Misa\n2013/3/12 Misa Simic <[email protected]>\nHi all,We have one table with list of \"records for processing\"...\nWe loop trough that table and call one long runing function:\n do_the_math_for_record(record_id) which use different tables for select related rows for input record_id, do some calculations and insert results in two tables...\nand we have made 1 function process_all_records()what simply does: SELECT do_the_math_for_record(record_id) FROM records_for_processing\nWhen we run that function - it last about 4 minutes...There are about 300 rows in records_for_processing... we have logged the time on the beginning of do_the_math, and the time in end of do the math... and noticed that processing each row, last between 0.5 to 2 seconds...\nso our do_the_math looks like:PERFORM log_time(record_id, clock_timestamp(), 1)PERFORM do_the_math_and_save_results(record_id);\nPERFORM log_time(record_id, clock_timestamp(), 2)Then we thought, if we take all \"records for processing\" and process each in separate connection - it should last longer...\nbut - got worse result! (using 30 concurrent connections...)... about 7 mins...if we reduce concurrent connections on 10 - we got result in approx the same time as sequential processing...\nbut - if replace do_the_math_and_save_results with pg_sleep(1); To simulate long running function so processing each row - last 1 sec...Sequential processing last as expected 300 seconds!\nConcurrent processing last faster with higher number of concurrent connections - about 30 seconds with 30 connections! 
(much faster - and expected...)however, if we return our: do_the_math_and_save_results - we can't get better results in concurrent processing...\nwith higher number of conccurent connections - result is worse... also we have noticed that for some records difference between end_time and start_time si even longer than 1 min - but it is random - not always on the same id... i.e. in this concurrent run lasts 1 min - in next 1 sec - but some other takes about 1 min...\nAny idea - why? :)It says to me - that there is somewhere lock on some tables - so probably our concurrent connections wait - to other finish... but I cant figure out: what and why...\ndo_the_math_and_save results - selects data from 10 other tables, calculates something, and results inserts in other tables... there are about 3 tracking tables with (record_id - other data...... and about 7 settings tables what we join to tracking tables to get all info...), then do the math with that info - and insert results..\nwe don't do any update... (to have possibility two connections want to update the same row in the same table)data from tracking_tables - should be separate sets of data for two differenet record_ids...\n(joined rows from settings tables could be common - for two sets of different record_id)but - even they are the same set - SELECTs should not lock the rows in tables...\nThere are places where we do:INSERT INTO result_table (columns)SELECT query (tracking and settings tables joined)\nIs there a chance it does some lock somewhere?can above query be run \"concurrently\"?Many thanks,\nMisa",
"msg_date": "Tue, 12 Mar 2013 15:13:06 +0100",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow concurrent processing"
},
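A sketch of the staging-table idea described above, with result_table1 standing in for one of the real result tables: workers insert into an unlogged copy that carries no foreign keys, indexes or triggers, and the rows are moved across in a single statement once the whole batch is done. (As a later reply in this thread points out, the FK checks themselves only take shared row locks on the settings rows on anything newer than 8.1, so this may speed up the inserts without fixing the original stall.)

CREATE UNLOGGED TABLE result_table1_stage
    (LIKE result_table1 INCLUDING DEFAULTS);   -- same columns, no FKs or indexes

-- each worker inserts into result_table1_stage instead of result_table1

-- once all workers have finished:
INSERT INTO result_table1 SELECT * FROM result_table1_stage;
TRUNCATE result_table1_stage;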
{
"msg_contents": "On 03/11/2013 08:55 PM, Misa Simic wrote:\n> Hi all,\n>\n> We have one table with list of \"records for processing\"...\n>\n> We loop trough that table and call one long runing function:\n>\n> do_the_math_for_record(record_id)...<snip>...\n>\n> but - if replace do_the_math_and_save_results with pg_sleep(1); To \n> simulate long running function so processing each row - last 1 sec...\n>\n> Sequential processing last as expected 300 seconds!\n>\n> Concurrent processing last faster with higher number of \n> concurrent connections - about 30 seconds with 30 connections! (much \n> faster - and expected...)\n>\n> however, if we return our: do_the_math_and_save_results - we can't get \n> better results in concurrent processing...\n\nSleep will not have any significant impact on CPU, memory or disk use \nand thus is not a simulation of actual processing.\n\nAll you have really shown us so far is a black box. Please provide an \noverview of your schemas and the type of processing you are attempting \non them.\n\nCheers,\nSteve\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 12 Mar 2013 07:20:18 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow concurrent processing"
},
{
"msg_contents": "Thanks Steve\n\nWell, the full story is too complex - but point was - whatever blackbox\ndoes - it last 0.5 to 2secs per 1 processed record (maybe I was wrong but I\nthought the reason why it takes the time how much it needs to actually do\nthe task -CPU/IO/memory whatever is not that important....) - so I really\ndon't see difference between: call web service, insert row in the table\n(takes 3 secs) and sleep 3 seconds - insert result in the table...\n\nif we do above task for two things sequential - it will last 6 secs...but\nif we do it \"concurentelly\" - it should last 3 secs... (in theory :) )\n\nI was guessed somewhere is lock - but wasn't clear where/why when there are\nno updates - just inserts...\n\nBut I haven't know that during INSERT is done row lock on refferenced\ntables as well - from FK columns...\n\nSo I guess now it is cause of the problem...\n\nWe will see how it goes with insert into unlogged tables with no FK...\n\nMany thanks,\n\nMisa\n\n\n2013/3/12 Steve Crawford <[email protected]>\n\n> On 03/11/2013 08:55 PM, Misa Simic wrote:\n>\n>> Hi all,\n>>\n>> We have one table with list of \"records for processing\"...\n>>\n>> We loop trough that table and call one long runing function:\n>>\n>> do_the_math_for_record(record_**id)...<snip>...\n>>\n>>\n>> but - if replace do_the_math_and_save_results with pg_sleep(1); To\n>> simulate long running function so processing each row - last 1 sec...\n>>\n>> Sequential processing last as expected 300 seconds!\n>>\n>> Concurrent processing last faster with higher number of concurrent\n>> connections - about 30 seconds with 30 connections! (much faster - and\n>> expected...)\n>>\n>> however, if we return our: do_the_math_and_save_results - we can't get\n>> better results in concurrent processing...\n>>\n>\n> Sleep will not have any significant impact on CPU, memory or disk use and\n> thus is not a simulation of actual processing.\n>\n> All you have really shown us so far is a black box. Please provide an\n> overview of your schemas and the type of processing you are attempting on\n> them.\n>\n> Cheers,\n> Steve\n>\n>\n\nThanks SteveWell, the full story is too complex - but point was - whatever blackbox does - it last 0.5 to 2secs per 1 processed record (maybe I was wrong but I thought the reason why it takes the time how much it needs to actually do the task -CPU/IO/memory whatever is not that important....) - so I really don't see difference between: call web service, insert row in the table (takes 3 secs) and sleep 3 seconds - insert result in the table...\nif we do above task for two things sequential - it will last 6 secs...but if we do it \"concurentelly\" - it should last 3 secs... 
(in theory :) )\nI was guessed somewhere is lock - but wasn't clear where/why when there are no updates - just inserts...But I haven't know that during INSERT is done row lock on refferenced tables as well - from FK columns...\nSo I guess now it is cause of the problem...We will see how it goes with insert into unlogged tables with no FK...\nMany thanks,Misa2013/3/12 Steve Crawford <[email protected]>\nOn 03/11/2013 08:55 PM, Misa Simic wrote:\n\nHi all,\n\nWe have one table with list of \"records for processing\"...\n\nWe loop trough that table and call one long runing function:\n\n do_the_math_for_record(record_id)...<snip>...\n\nbut - if replace do_the_math_and_save_results with pg_sleep(1); To simulate long running function so processing each row - last 1 sec...\n\nSequential processing last as expected 300 seconds!\n\nConcurrent processing last faster with higher number of concurrent connections - about 30 seconds with 30 connections! (much faster - and expected...)\n\nhowever, if we return our: do_the_math_and_save_results - we can't get better results in concurrent processing...\n\n\nSleep will not have any significant impact on CPU, memory or disk use and thus is not a simulation of actual processing.\n\nAll you have really shown us so far is a black box. Please provide an overview of your schemas and the type of processing you are attempting on them.\n\nCheers,\nSteve",
"msg_date": "Tue, 12 Mar 2013 16:06:46 +0100",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow concurrent processing"
},
{
"msg_contents": "On 03/12/2013 08:06 AM, Misa Simic wrote:\n> Thanks Steve\n>\n> Well, the full story is too complex - but point was - whatever \n> blackbox does - it last 0.5 to 2secs per 1 processed record (maybe I \n> was wrong but I thought the reason why it takes the time how much it \n> needs to actually do the task -CPU/IO/memory whatever is not that \n> important....) - so I really don't see difference between: call web \n> service, insert row in the table (takes 3 secs) and sleep 3 seconds - \n> insert result in the table...\n>\n> if we do above task for two things sequential - it will last 6 \n> secs...but if we do it \"concurentelly\" - it should last 3 secs... (in \n> theory :) )\n\nNot at all - even in \"theory.\" Sleep involves little, if any, contention \nfor resources. Real processing does. So if a process requires 100% of \navailable CPU then one process gets it all while many running \nsimultaneously will have to share the available CPU resource and thus \neach will take longer to complete. Or, if you prefer, think of a file \ndownload. If it takes an hour to download a 1GB file it doesn't mean \nthat you can download two 1GB files concurrently in one hour even if \n\"simulating\" the process by a sleep(3600) suggests it is possible.\n\nI should note, however, that depending on the resource that is limiting \nyour speed there is often room for optimization through simultaneous \nprocessing - especially when processes are CPU bound. Since PostgreSQL \nassociates each back-end with one CPU *core*, you can have a situation \nwhere one core is spinning and the others are more-or-less idle. In \nthose cases you may see an improvement by increasing the number of \nsimultaneous processes to somewhere shy of the number of cores.\n\n>\n> I was guessed somewhere is lock - but wasn't clear where/why when \n> there are no updates - just inserts...\n>\n> But I haven't know that during INSERT is done row lock on refferenced \n> tables as well - from FK columns...\n>\n> So I guess now it is cause of the problem...\n>\n> We will see how it goes with insert into unlogged tables with no FK...\n>\n\nIt will almost certainly go faster as you have eliminated integrity and \ndata-safety. This may be acceptable to you (non-real-time crunching of \ndata that can be reloaded from external sources or temporary processing \nthat is ultimately written back to durable storage) but it doesn't mean \nyou have identified the actual cause.\n\nOne thing you didn't state. Is all this processing taking place in \nPostgreSQL? (i.e. update foo set bar = do_the_math(baz, zap, boom)) \nwhere do_the_math is a PL/pgSQL, PL/Python, ... or are external \nprocesses involved?\n\nCheers,\nSteve\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 12 Mar 2013 08:54:57 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow concurrent processing"
},
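Since each backend can only use one core, the usual way to exploit the remaining cores is to give every connection its own fixed slice of the queue rather than letting all of them contend for the same rows. A simple (assumed) slicing scheme, if record_id is an integer, is a modulo split with the worker count kept a little below the number of cores:

-- worker number 3 out of 8 parallel connections
SELECT do_the_math_for_record(record_id)
  FROM records_for_processing
 WHERE record_id % 8 = 3;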
{
"msg_contents": "On Tue, Mar 12, 2013 at 7:13 AM, Misa Simic <[email protected]> wrote:\n\n> Hi,\n>\n> Researching deeply my problem with concurrent processing i have found:\n>\n>\n> http://postgresql.1045698.n5.nabble.com/WHY-transaction-waits-for-another-transaction-tp2142627p2142630.html\n>\n>\n> \"The more likely suspect is a foreign key conflict.\n> Are both transactions inserting/updating rows that could reference\n> the same row(s) in a master table?\" - Tom Lane\n>\n> This is exactly the case (in my case) - several connections tries to\n> insert rows in the same table... but some columns are referenced to\n> settings tables... and there is possibility that two rows what we want to\n> insert reference the same row in settings table...\n>\n\nUnless you are running an ancient version of PostgreSQL (<8.1), this would\nno longer pose a problem.\n\nCheers,\n\nJeff\n\nOn Tue, Mar 12, 2013 at 7:13 AM, Misa Simic <[email protected]> wrote:\nHi,Researching deeply my problem with concurrent processing i have found:http://postgresql.1045698.n5.nabble.com/WHY-transaction-waits-for-another-transaction-tp2142627p2142630.html\n\"The more likely suspect is a foreign key conflict. \nAre both transactions inserting/updating rows that could reference \nthe same row(s) in a master table?\" - Tom LaneThis is exactly the case (in my case) - several connections tries to insert rows in the same table... but some columns are referenced to settings tables... and there is possibility that two rows what we want to insert reference the same row in settings table...\nUnless you are running an ancient version of PostgreSQL (<8.1), this would no longer pose a problem.Cheers,Jeff",
"msg_date": "Tue, 12 Mar 2013 10:09:21 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow concurrent processing"
},
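To illustrate Jeff's point with hypothetical column names: since 8.1 the foreign-key check behind an INSERT only takes a shared lock on the referenced settings row, so two sessions inserting children of the same settings row do not block each other; only a session trying to UPDATE or DELETE that settings row has to wait.

-- session 1
BEGIN;
INSERT INTO result_table1 (record_id, setting_id, amount) VALUES (101, 5, 10.0);

-- session 2: does not wait, the FK check only takes a shared lock on settings row 5
BEGIN;
INSERT INTO result_table1 (record_id, setting_id, amount) VALUES (102, 5, 12.5);

-- a third session running UPDATE settings SET ... WHERE id = 5 would block
-- until both of the above commit, but the two inserters never block each other.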
{
"msg_contents": "Thanks Steve,\n\nOf course I thought under the limits... I haven't thought there are that\nkind of problems(CPU/Memory/io) because of there are no degradation during\nlong running process - on other sides... i.e. some complex query - run\nwhen long running process is off and run it when long runing process is\nunder the go - takes similar time etc... (and that query uses as well\ntables involved in long runing do_the_math_ function - but of course dont\nask at all for potential rows what will long runining functin produce - but\nwould not get it anyway even asks...)\n\nall the processing is under postgres... (but no updates at all - that would\npoint me directly to potential row_lock problem...)\n\nTo process one record - is again deeper sequential processing thing with\nlot if/else etc...\n\nSomething like:\n\nGetMasterInfo about RecordID (join several settings table related to input\nRecordID)\n\nIf that RecordID is that type then\n apply_callculation1(recordID)\nelse\n apply_calculation2(recordID)\n and so on...\n\nthen for exapmple apply_calculation1 says:\n\nget all records for this recordID between related period... (From tracking\ntables)\nfor each day... take status from that day... calculate hours what match\ndifferent time periods during the day, and use different rate for each -\nbut again that rate - in some cases depends on Total hours spent in the\nweek that day belongs for that record_id etc etc...\nso basicaly insert in result_table1 - splited amounts by category for each\nday applying different calculations for each category...\nThen later sum things from result_table1 - insert_them in result_table2...\nand do again further calculations based on info in resut_table2 and insert\nresults in the same table...\n\nAll that math for 1 thing - last 0.5 to 2secs - depending on lot of things\netc,,,\n\n\nsleep(1) - was just simplified thing to spent required time for\nprocessing... not to help about hardware limits and bandwith :)\n\n\njust the fact we can run complex query during long processing function is\nunder run - said me there are no hardware resource problems...\n\n\nMany thanks,\n\nMisa\n\n\n\n\n\n\n\n\n2013/3/12 Steve Crawford <[email protected]>\n\n> On 03/12/2013 08:06 AM, Misa Simic wrote:\n>\n>> Thanks Steve\n>>\n>> Well, the full story is too complex - but point was - whatever blackbox\n>> does - it last 0.5 to 2secs per 1 processed record (maybe I was wrong but I\n>> thought the reason why it takes the time how much it needs to actually do\n>> the task -CPU/IO/memory whatever is not that important....) - so I really\n>> don't see difference between: call web service, insert row in the table\n>> (takes 3 secs) and sleep 3 seconds - insert result in the table...\n>>\n>> if we do above task for two things sequential - it will last 6 secs...but\n>> if we do it \"concurentelly\" - it should last 3 secs... (in theory :) )\n>>\n>\n> Not at all - even in \"theory.\" Sleep involves little, if any, contention\n> for resources. Real processing does. So if a process requires 100% of\n> available CPU then one process gets it all while many running\n> simultaneously will have to share the available CPU resource and thus each\n> will take longer to complete. 
Or, if you prefer, think of a file download.\n> If it takes an hour to download a 1GB file it doesn't mean that you can\n> download two 1GB files concurrently in one hour even if \"simulating\" the\n> process by a sleep(3600) suggests it is possible.\n>\n> I should note, however, that depending on the resource that is limiting\n> your speed there is often room for optimization through simultaneous\n> processing - especially when processes are CPU bound. Since PostgreSQL\n> associates each back-end with one CPU *core*, you can have a situation\n> where one core is spinning and the others are more-or-less idle. In those\n> cases you may see an improvement by increasing the number of simultaneous\n> processes to somewhere shy of the number of cores.\n>\n>\n>\n>> I was guessed somewhere is lock - but wasn't clear where/why when there\n>> are no updates - just inserts...\n>>\n>> But I haven't know that during INSERT is done row lock on refferenced\n>> tables as well - from FK columns...\n>>\n>> So I guess now it is cause of the problem...\n>>\n>> We will see how it goes with insert into unlogged tables with no FK...\n>>\n>>\n> It will almost certainly go faster as you have eliminated integrity and\n> data-safety. This may be acceptable to you (non-real-time crunching of data\n> that can be reloaded from external sources or temporary processing that is\n> ultimately written back to durable storage) but it doesn't mean you have\n> identified the actual cause.\n>\n> One thing you didn't state. Is all this processing taking place in\n> PostgreSQL? (i.e. update foo set bar = do_the_math(baz, zap, boom)) where\n> do_the_math is a PL/pgSQL, PL/Python, ... or are external processes\n> involved?\n>\n> Cheers,\n> Steve\n>\n>\n\nThanks Steve,Of course I thought under the limits... I haven't thought there are that kind of problems(CPU/Memory/io) because of there are no degradation during long running process - on other sides... i.e. some complex query - run when long running process is off and run it when long runing process is under the go - takes similar time etc... (and that query uses as well tables involved in long runing do_the_math_ function - but of course dont ask at all for potential rows what will long runining functin produce - but would not get it anyway even asks...)\nall the processing is under postgres... (but no updates at all - that would point me directly to potential row_lock problem...)To process one record - is again deeper sequential processing thing with lot if/else etc...\nSomething like:GetMasterInfo about RecordID (join several settings table related to input RecordID)If that RecordID is that type then\n apply_callculation1(recordID)else apply_calculation2(recordID) and so on...then for exapmple apply_calculation1 says:\nget all records for this recordID between related period... (From tracking tables)for each day... take status from that day... calculate hours what match different time periods during the day, and use different rate for each - but again that rate - in some cases depends on Total hours spent in the week that day belongs for that record_id etc etc...\nso basicaly insert in result_table1 - splited amounts by category for each day applying different calculations for each category...Then later sum things from result_table1 - insert_them in result_table2... 
and do again further calculations based on info in resut_table2 and insert results in the same table...\nAll that math for 1 thing - last 0.5 to 2secs - depending on lot of things etc,,,sleep(1) - was just simplified thing to spent required time for processing... not to help about hardware limits and bandwith :)\njust the fact we can run complex query during long processing function is under run - said me there are no hardware resource problems...\nMany thanks,Misa \n2013/3/12 Steve Crawford <[email protected]>\nOn 03/12/2013 08:06 AM, Misa Simic wrote:\n\nThanks Steve\n\nWell, the full story is too complex - but point was - whatever blackbox does - it last 0.5 to 2secs per 1 processed record (maybe I was wrong but I thought the reason why it takes the time how much it needs to actually do the task -CPU/IO/memory whatever is not that important....) - so I really don't see difference between: call web service, insert row in the table (takes 3 secs) and sleep 3 seconds - insert result in the table...\n\nif we do above task for two things sequential - it will last 6 secs...but if we do it \"concurentelly\" - it should last 3 secs... (in theory :) )\n\n\nNot at all - even in \"theory.\" Sleep involves little, if any, contention for resources. Real processing does. So if a process requires 100% of available CPU then one process gets it all while many running simultaneously will have to share the available CPU resource and thus each will take longer to complete. Or, if you prefer, think of a file download. If it takes an hour to download a 1GB file it doesn't mean that you can download two 1GB files concurrently in one hour even if \"simulating\" the process by a sleep(3600) suggests it is possible.\n\nI should note, however, that depending on the resource that is limiting your speed there is often room for optimization through simultaneous processing - especially when processes are CPU bound. Since PostgreSQL associates each back-end with one CPU *core*, you can have a situation where one core is spinning and the others are more-or-less idle. In those cases you may see an improvement by increasing the number of simultaneous processes to somewhere shy of the number of cores.\n\n\n\n\nI was guessed somewhere is lock - but wasn't clear where/why when there are no updates - just inserts...\n\nBut I haven't know that during INSERT is done row lock on refferenced tables as well - from FK columns...\n\nSo I guess now it is cause of the problem...\n\nWe will see how it goes with insert into unlogged tables with no FK...\n\n\n\nIt will almost certainly go faster as you have eliminated integrity and data-safety. This may be acceptable to you (non-real-time crunching of data that can be reloaded from external sources or temporary processing that is ultimately written back to durable storage) but it doesn't mean you have identified the actual cause.\n\nOne thing you didn't state. Is all this processing taking place in PostgreSQL? (i.e. update foo set bar = do_the_math(baz, zap, boom)) where do_the_math is a PL/pgSQL, PL/Python, ... or are external processes involved?\n\nCheers,\nSteve",
"msg_date": "Tue, 12 Mar 2013 18:11:46 +0100",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow concurrent processing"
},
{
"msg_contents": ":(\n\nAh - 9.1.0 is postgres version on Ubuntu...\n\nThanks Jeff - you saved me some time - reorganising functions to work with\ndifferent tables would take time... what potentially will not give us\nsolution :(\n\nMany thanks,\n\nMisa\n\n\n2013/3/12 Jeff Janes <[email protected]>\n\n> On Tue, Mar 12, 2013 at 7:13 AM, Misa Simic <[email protected]> wrote:\n>\n>> Hi,\n>>\n>> Researching deeply my problem with concurrent processing i have found:\n>>\n>>\n>> http://postgresql.1045698.n5.nabble.com/WHY-transaction-waits-for-another-transaction-tp2142627p2142630.html\n>>\n>>\n>> \"The more likely suspect is a foreign key conflict.\n>> Are both transactions inserting/updating rows that could reference\n>> the same row(s) in a master table?\" - Tom Lane\n>>\n>> This is exactly the case (in my case) - several connections tries to\n>> insert rows in the same table... but some columns are referenced to\n>> settings tables... and there is possibility that two rows what we want to\n>> insert reference the same row in settings table...\n>>\n>\n> Unless you are running an ancient version of PostgreSQL (<8.1), this would\n> no longer pose a problem.\n>\n> Cheers,\n>\n> Jeff\n>\n\n:(Ah - 9.1.0 is postgres version on Ubuntu...Thanks Jeff - you saved me some time - reorganising functions to work with different tables would take time... what potentially will not give us solution :(\nMany thanks,Misa2013/3/12 Jeff Janes <[email protected]>\nOn Tue, Mar 12, 2013 at 7:13 AM, Misa Simic <[email protected]> wrote:\n\nHi,Researching deeply my problem with concurrent processing i have found:http://postgresql.1045698.n5.nabble.com/WHY-transaction-waits-for-another-transaction-tp2142627p2142630.html\n\"The more likely suspect is a foreign key conflict. \nAre both transactions inserting/updating rows that could reference \nthe same row(s) in a master table?\" - Tom LaneThis is exactly the case (in my case) - several connections tries to insert rows in the same table... but some columns are referenced to settings tables... and there is possibility that two rows what we want to insert reference the same row in settings table...\nUnless you are running an ancient version of PostgreSQL (<8.1), this would no longer pose a problem.Cheers,Jeff",
"msg_date": "Tue, 12 Mar 2013 18:16:16 +0100",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow concurrent processing"
}
] |
[
{
"msg_contents": "I'm considering the following setup:\n\n- Master server with battery back raid controller with 4 SAS disks in a RAID 0 - so NO mirroring here, due to max performance requirements.\n- Slave server setup with streaming replication on 4 HDD's in RAID 10. The setup will be done with synchronous_commit=off and synchronous_standby_names = ''\n\nSo as you might have noticed, clearly there is a risk of data loss, which is acceptable, since our data is not very crucial. However, I have quite a hard time figuring out, if there is a risk of total data corruption across both server in this setup? E.g. something goes wrong on the master and the wal files gets corrupt. Will the slave then apply the wal files INCLUDING the corruption (e.g. an unfinished transaction etc.), or will it automatically stop restoring at the point just BEFORE the corruption, so my only loss is data AFTER the corruption?\n\nHope my question is clear\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Mar 2013 16:24:03 +0100",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Risk of data corruption/loss?"
},
{
"msg_contents": "On Wed, Mar 13, 2013 at 8:24 AM, Niels Kristian Schjødt <\[email protected]> wrote:\n\n> I'm considering the following setup:\n>\n> - Master server with battery back raid controller with 4 SAS disks in a\n> RAID 0 - so NO mirroring here, due to max performance requirements.\n> - Slave server setup with streaming replication on 4 HDD's in RAID 10. The\n> setup will be done with synchronous_commit=off and\n> synchronous_standby_names = ''\n>\n\nOut of curiosity, in the presence of BB controller, is\nsynchronous_commit=off getting you additional performance?\n\n\n> So as you might have noticed, clearly there is a risk of data loss, which\n> is acceptable, since our data is not very crucial. However, I have quite a\n> hard time figuring out, if there is a risk of total data corruption across\n> both server in this setup? E.g. something goes wrong on the master and the\n> wal files gets corrupt. Will the slave then apply the wal files INCLUDING\n> the corruption (e.g. an unfinished transaction etc.), or will it\n> automatically stop restoring at the point just BEFORE the corruption, so my\n> only loss is data AFTER the corruption?\n>\n\nIt depends on where the corruption happens. WAL is checksummed, so the\nslave will detect a mismatch and stop applying records. However, if the\ncorruption happens in RAM before the checksum is taken, the checksum will\nmatch and it will attempt to apply the records.\n\nCheers,\n\nJeff\n\nOn Wed, Mar 13, 2013 at 8:24 AM, Niels Kristian Schjødt <[email protected]> wrote:\nI'm considering the following setup:\n\n- Master server with battery back raid controller with 4 SAS disks in a RAID 0 - so NO mirroring here, due to max performance requirements.\n- Slave server setup with streaming replication on 4 HDD's in RAID 10. The setup will be done with synchronous_commit=off and synchronous_standby_names = ''Out of curiosity, in the presence of BB controller, is synchronous_commit=off getting you additional performance?\n\n\nSo as you might have noticed, clearly there is a risk of data loss, which is acceptable, since our data is not very crucial. However, I have quite a hard time figuring out, if there is a risk of total data corruption across both server in this setup? E.g. something goes wrong on the master and the wal files gets corrupt. Will the slave then apply the wal files INCLUDING the corruption (e.g. an unfinished transaction etc.), or will it automatically stop restoring at the point just BEFORE the corruption, so my only loss is data AFTER the corruption?\nIt depends on where the corruption happens. WAL is checksummed, so the slave will detect a mismatch and stop applying records. However, if the corruption happens in RAM before the checksum is taken, the checksum will match and it will attempt to apply the records.\nCheers,Jeff",
"msg_date": "Wed, 13 Mar 2013 10:13:28 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Risk of data corruption/loss?"
},
{
"msg_contents": "Den 13/03/2013 kl. 18.13 skrev Jeff Janes <[email protected]>:\n\n> On Wed, Mar 13, 2013 at 8:24 AM, Niels Kristian Schjødt <[email protected]> wrote:\n> I'm considering the following setup:\n> \n> - Master server with battery back raid controller with 4 SAS disks in a RAID 0 - so NO mirroring here, due to max performance requirements.\n> - Slave server setup with streaming replication on 4 HDD's in RAID 10. The setup will be done with synchronous_commit=off and synchronous_standby_names = ''\n> \n> Out of curiosity, in the presence of BB controller, is synchronous_commit=off getting you additional performance?\n\nTime will show :-)\n> \n> \n> So as you might have noticed, clearly there is a risk of data loss, which is acceptable, since our data is not very crucial. However, I have quite a hard time figuring out, if there is a risk of total data corruption across both server in this setup? E.g. something goes wrong on the master and the wal files gets corrupt. Will the slave then apply the wal files INCLUDING the corruption (e.g. an unfinished transaction etc.), or will it automatically stop restoring at the point just BEFORE the corruption, so my only loss is data AFTER the corruption?\n> \n> It depends on where the corruption happens. WAL is checksummed, so the slave will detect a mismatch and stop applying records. However, if the corruption happens in RAM before the checksum is taken, the checksum will match and it will attempt to apply the records.\n> \n> Cheers,\n> \n> Jeff\n\n\nDen 13/03/2013 kl. 18.13 skrev Jeff Janes <[email protected]>:On Wed, Mar 13, 2013 at 8:24 AM, Niels Kristian Schjødt <[email protected]> wrote:\nI'm considering the following setup:\n\n- Master server with battery back raid controller with 4 SAS disks in a RAID 0 - so NO mirroring here, due to max performance requirements.\n- Slave server setup with streaming replication on 4 HDD's in RAID 10. The setup will be done with synchronous_commit=off and synchronous_standby_names = ''Out of curiosity, in the presence of BB controller, is synchronous_commit=off getting you additional performance?Time will show :-)\n\nSo as you might have noticed, clearly there is a risk of data loss, which is acceptable, since our data is not very crucial. However, I have quite a hard time figuring out, if there is a risk of total data corruption across both server in this setup? E.g. something goes wrong on the master and the wal files gets corrupt. Will the slave then apply the wal files INCLUDING the corruption (e.g. an unfinished transaction etc.), or will it automatically stop restoring at the point just BEFORE the corruption, so my only loss is data AFTER the corruption?\nIt depends on where the corruption happens. WAL is checksummed, so the slave will detect a mismatch and stop applying records. However, if the corruption happens in RAM before the checksum is taken, the checksum will match and it will attempt to apply the records.\nCheers,Jeff",
"msg_date": "Wed, 13 Mar 2013 18:34:19 +0100",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Risk of data corruption/loss?"
},
{
"msg_contents": "Neils,\n\n> - Master server with battery back raid controller with 4 SAS disks in\n> a RAID 0 - so NO mirroring here, due to max performance\n> requirements.\n> - Slave server setup with streaming replication on 4 HDD's in RAID\n> 10. The setup will be done with synchronous_commit=off and\n> synchronous_standby_names = ''\n\nI'd be concerned that, assuming you're making the master high-risk for performance reasons, that the standby would not keep up.\n \n> So as you might have noticed, clearly there is a risk of data loss,\n> which is acceptable, since our data is not very crucial. However, I\n> have quite a hard time figuring out, if there is a risk of total\n> data corruption across both server in this setup? E.g. something\n> goes wrong on the master and the wal files gets corrupt. Will the\n> slave then apply the wal files INCLUDING the corruption (e.g. an\n> unfinished transaction etc.), or will it automatically stop\n> restoring at the point just BEFORE the corruption, so my only loss\n> is data AFTER the corruption?\n\nWell, in general RAID 1 really just protects you from HDD failure, not more subtle types of corruption which occur onboard an HDD. So from that respect, you haven't increased your chances of data corruption at all; if the master loses a disk, it should just stop operating; a simple check that all WALs are 16MB on the standby would do the rest. I'd be more concerned that you're likely to be yanking and completely rebuilding the master server every 4 or 5 months.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Mar 2013 16:18:38 -0500 (CDT)",
"msg_from": "Joshua Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Risk of data corruption/loss?"
}
] |
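Whichever layout is chosen for the master, Josh's concern about the standby keeping up is cheap to monitor. A small sketch, assuming PostgreSQL 9.1/9.2 as used elsewhere in these threads (several of these functions and columns were renamed in later releases):

-- on the master: one row per connected standby
SELECT application_name, state, sent_location, replay_location, sync_state
FROM pg_stat_replication;

-- on the standby: confirm recovery is active and estimate the replay delay
SELECT pg_is_in_recovery(),
       pg_last_xlog_receive_location(),
       pg_last_xlog_replay_location(),
       now() - pg_last_xact_replay_timestamp() AS approx_replay_delay;
-- approx_replay_delay is only meaningful while the master is generating WAL.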
[
{
"msg_contents": "I have a server with 32GB ram, one intel E3-1245 and four 15k SAS disks with a BB LSI MegaRaid controller. I wan't the optimal performance for my server, which will be pretty write heavy at times, and less optimized for redundancy, as my data is not very crucial and I will be running a streaming replication along side.\n\nNow what would you prefer:\n\n1) 3 disks in RAID 0 containing PGDATA + 1 containing SYSTEM and WAL\n2) All four in RAID 10 containing both PGDATA, SYSTEM AND WAL\n3) 2 disks in RAID 1 containing PGDATA + 2 disks in RAID 1 containing SYSTEM and WAL\n4) Something different? \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Mar 2013 18:43:16 +0100",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Setup of four 15k SAS disk with LSI raid controller"
},
{
"msg_contents": "raid0 tends to linear scaling so 3 of them should give something close to\n300% increased write speed. So i would say 1. but make sure you test your\nconfiguration as soon as you can with bonnie++ or something similar\n\nOn Wed, Mar 13, 2013 at 7:43 PM, Niels Kristian Schjødt <\[email protected]> wrote:\n\n> I have a server with 32GB ram, one intel E3-1245 and four 15k SAS disks\n> with a BB LSI MegaRaid controller. I wan't the optimal performance for my\n> server, which will be pretty write heavy at times, and less optimized for\n> redundancy, as my data is not very crucial and I will be running a\n> streaming replication along side.\n>\n> Now what would you prefer:\n>\n> 1) 3 disks in RAID 0 containing PGDATA + 1 containing SYSTEM and WAL\n> 2) All four in RAID 10 containing both PGDATA, SYSTEM AND WAL\n> 3) 2 disks in RAID 1 containing PGDATA + 2 disks in RAID 1 containing\n> SYSTEM and WAL\n> 4) Something different?\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nraid0 tends to linear scaling so 3 of them should give something close to 300% increased write speed. So i would say 1. but make sure you test your configuration as soon as you can with bonnie++ or something similar\nOn Wed, Mar 13, 2013 at 7:43 PM, Niels Kristian Schjødt <[email protected]> wrote:\nI have a server with 32GB ram, one intel E3-1245 and four 15k SAS disks with a BB LSI MegaRaid controller. I wan't the optimal performance for my server, which will be pretty write heavy at times, and less optimized for redundancy, as my data is not very crucial and I will be running a streaming replication along side.\n\nNow what would you prefer:\n\n1) 3 disks in RAID 0 containing PGDATA + 1 containing SYSTEM and WAL\n2) All four in RAID 10 containing both PGDATA, SYSTEM AND WAL\n3) 2 disks in RAID 1 containing PGDATA + 2 disks in RAID 1 containing SYSTEM and WAL\n4) Something different?\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 13 Mar 2013 20:15:25 +0200",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setup of four 15k SAS disk with LSI raid controller"
},
{
"msg_contents": "Den 13/03/2013 kl. 19.15 skrev Vasilis Ventirozos <[email protected]>:\n\n> raid0 tends to linear scaling so 3 of them should give something close to 300% increased write speed. So i would say 1. but make sure you test your configuration as soon as you can with bonnie++ or something similar\n> \n> On Wed, Mar 13, 2013 at 7:43 PM, Niels Kristian Schjødt <[email protected]> wrote:\n> I have a server with 32GB ram, one intel E3-1245 and four 15k SAS disks with a BB LSI MegaRaid controller. I wan't the optimal performance for my server, which will be pretty write heavy at times, and less optimized for redundancy, as my data is not very crucial and I will be running a streaming replication along side.\n> \n> Now what would you prefer:\n> \n> 1) 3 disks in RAID 0 containing PGDATA + 1 containing SYSTEM and WAL\n> 2) All four in RAID 10 containing both PGDATA, SYSTEM AND WAL\n> 3) 2 disks in RAID 1 containing PGDATA + 2 disks in RAID 1 containing SYSTEM and WAL\n> 4) Something different?\nA 5. option could also be to simply have all 4 disk in a RAID 0 containing all PGDATA, SYSTEM and WAL\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\nDen 13/03/2013 kl. 19.15 skrev Vasilis Ventirozos <[email protected]>:raid0 tends to linear scaling so 3 of them should give something close to 300% increased write speed. So i would say 1. but make sure you test your configuration as soon as you can with bonnie++ or something similar\nOn Wed, Mar 13, 2013 at 7:43 PM, Niels Kristian Schjødt <[email protected]> wrote:\nI have a server with 32GB ram, one intel E3-1245 and four 15k SAS disks with a BB LSI MegaRaid controller. I wan't the optimal performance for my server, which will be pretty write heavy at times, and less optimized for redundancy, as my data is not very crucial and I will be running a streaming replication along side.\n\nNow what would you prefer:\n\n1) 3 disks in RAID 0 containing PGDATA + 1 containing SYSTEM and WAL\n2) All four in RAID 10 containing both PGDATA, SYSTEM AND WAL\n3) 2 disks in RAID 1 containing PGDATA + 2 disks in RAID 1 containing SYSTEM and WAL\n4) Something different?A 5. option could also be to simply have all 4 disk in a RAID 0 containing all PGDATA, SYSTEM and WAL\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 13 Mar 2013 19:37:40 +0100",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setup of four 15k SAS disk with LSI raid controller"
},
{
"msg_contents": "Its better to split WAL segments and data just because these two have\ndifferent io requirements and because its easier to measure and tune things\nif you have them on different disks.\n\n Vasilis Ventirozos\n\nOn Wed, Mar 13, 2013 at 8:37 PM, Niels Kristian Schjødt <\[email protected]> wrote:\n\n> Den 13/03/2013 kl. 19.15 skrev Vasilis Ventirozos <[email protected]\n> >:\n>\n> raid0 tends to linear scaling so 3 of them should give something close to\n> 300% increased write speed. So i would say 1. but make sure you test your\n> configuration as soon as you can with bonnie++ or something similar\n>\n> On Wed, Mar 13, 2013 at 7:43 PM, Niels Kristian Schjødt <\n> [email protected]> wrote:\n>\n>> I have a server with 32GB ram, one intel E3-1245 and four 15k SAS disks\n>> with a BB LSI MegaRaid controller. I wan't the optimal performance for my\n>> server, which will be pretty write heavy at times, and less optimized for\n>> redundancy, as my data is not very crucial and I will be running a\n>> streaming replication along side.\n>>\n>> Now what would you prefer:\n>>\n>> 1) 3 disks in RAID 0 containing PGDATA + 1 containing SYSTEM and WAL\n>> 2) All four in RAID 10 containing both PGDATA, SYSTEM AND WAL\n>> 3) 2 disks in RAID 1 containing PGDATA + 2 disks in RAID 1 containing\n>> SYSTEM and WAL\n>> 4) Something different?\n>>\n> A 5. option could also be to simply have all 4 disk in a RAID 0 containing\n> all PGDATA, SYSTEM and WAL\n>\n>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n>\n\nIts better to split WAL segments and data just because these two have different io requirements and because its easier to measure and tune things if you have them on different disks. Vasilis Ventirozos\nOn Wed, Mar 13, 2013 at 8:37 PM, Niels Kristian Schjødt <[email protected]> wrote:\nDen 13/03/2013 kl. 19.15 skrev Vasilis Ventirozos <[email protected]>:\nraid0 tends to linear scaling so 3 of them should give something close to 300% increased write speed. So i would say 1. but make sure you test your configuration as soon as you can with bonnie++ or something similar\nOn Wed, Mar 13, 2013 at 7:43 PM, Niels Kristian Schjødt <[email protected]> wrote:\nI have a server with 32GB ram, one intel E3-1245 and four 15k SAS disks with a BB LSI MegaRaid controller. I wan't the optimal performance for my server, which will be pretty write heavy at times, and less optimized for redundancy, as my data is not very crucial and I will be running a streaming replication along side.\n\nNow what would you prefer:\n\n1) 3 disks in RAID 0 containing PGDATA + 1 containing SYSTEM and WAL\n2) All four in RAID 10 containing both PGDATA, SYSTEM AND WAL\n3) 2 disks in RAID 1 containing PGDATA + 2 disks in RAID 1 containing SYSTEM and WAL\n4) Something different?A 5. option could also be to simply have all 4 disk in a RAID 0 containing all PGDATA, SYSTEM and WAL\n\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 13 Mar 2013 20:45:40 +0200",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setup of four 15k SAS disk with LSI raid controller"
},
{
"msg_contents": "\nOn 03/13/2013 11:45 AM, Vasilis Ventirozos wrote:\n> Its better to split WAL segments and data just because these two have\n> different io requirements and because its easier to measure and tune\n> things if you have them on different disks.\n\nGenerally speaking you are correct but we are talking about RAID 0 here.\n\nJD\n\n-- \nCommand Prompt, Inc. - http://www.commandprompt.com/\nPostgreSQL Support, Training, Professional Services and Development\nHigh Availability, Oracle Conversion, Postgres-XC\n@cmdpromptinc - 509-416-6579\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Mar 2013 12:01:11 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setup of four 15k SAS disk with LSI raid controller"
},
{
"msg_contents": "\nDen 13/03/2013 kl. 20.01 skrev Joshua D. Drake <[email protected]>:\n\n> \n> On 03/13/2013 11:45 AM, Vasilis Ventirozos wrote:\n>> Its better to split WAL segments and data just because these two have\n>> different io requirements and because its easier to measure and tune\n>> things if you have them on different disks.\n> \n> Generally speaking you are correct but we are talking about RAID 0 here.\nSo your suggestion is?\n> JD\n> \n> -- \n> Command Prompt, Inc. - http://www.commandprompt.com/\n> PostgreSQL Support, Training, Professional Services and Development\n> High Availability, Oracle Conversion, Postgres-XC\n> @cmdpromptinc - 509-416-6579\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 13 Mar 2013 20:26:46 +0100",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setup of four 15k SAS disk with LSI raid controller"
}
] |
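One way to ground the "separate WAL or not" decision above is to measure how much WAL the workload generates versus how much checkpoint and backend I/O hits the data files. A rough sketch, assuming PostgreSQL 9.2 for pg_xlog_location_diff(); on 9.1 the two sampled locations can be diffed client-side instead. The LSN literals in the last statement are placeholders for the two sampled values.

-- checkpoint / background-writer activity since the stats were last reset
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend
FROM pg_stat_bgwriter;

-- WAL volume over a one-minute window
SELECT pg_current_xlog_location();   -- note the value, e.g. '3/4F2C0718'
SELECT pg_sleep(60);
SELECT pg_current_xlog_location();   -- note the value, e.g. '3/5A81D200'

-- 9.2+: bytes of WAL written between the two samples
SELECT pg_xlog_location_diff('3/5A81D200', '3/4F2C0718') AS wal_bytes_per_minute;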
[
{
"msg_contents": "Hi,\n\n \n\nI have PostgreSQL 9.0.12 on Windows. \n\n \n\nI have some simple function:\n\n \n\nCREATE OR REPLACE FUNCTION sfunction() RETURNS BOOL AS\n\n$BODY$\n\nDECLARE\n\nq TEXT;\n\nr RECORD;\n\nBEGIN\n\n q='SELECT 1 from tb_klient LIMIT 0';\n\n \n\n FOR r IN EXECUTE q\n\n LOOP\n\n END LOOP;\n\n RETURN NULL;\n\n \n\nRETURN NULL;\n\nEND;\n\n$BODY$\n\nLANGUAGE 'plpgsql';\n\n \n\n \n\nAnd some simple Query:\n\n \n\n \n\nexplain analyze SELECT sfunction() AS value\n\nFROM (\n\nSELECT 5604913 AS id ,5666 AS idtowmag \n\n) AS c \n\nLEFT OUTER JOIN tg_tm AS tm ON (tm.ttm_idtowmag=c.idtowmag);\n\n \n\nWhen I run this query explain analyze is:\n\n \n\nSubquery Scan on a (cost=0.00..0.27 rows=1 width=8) (actual\ntime=24.041..24.042 rows=1 loops=1)\n\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.002\nrows=1 loops=1)\n\n\"Total runtime: 24.068 ms\"\n\n \n\nBut when I change:\n\n1. Table tb_klient to some other table (but not any other - queries\nwith some tables are still slow) or\n\n2. \"FOR r IN EXECUTE q\"\nchange to\n\"FOR r IN SELECT 1 from tb_klient LIMIT 0\" or\n\n3. add \"LEFT OUTER JOIN tb_klient AS kl ON\n(kl.k_idklienta=c.idtowmag)\" to query\n\n \n\nExplain analyze of query is:\n\n\"Subquery Scan on a (cost=0.00..0.27 rows=1 width=8) (actual\ntime=1.868..1.869 rows=1 loops=1)\"\n\n\" -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.002..0.002\nrows=1 loops=1)\"\n\n\"Total runtime: 1.894 ms\"\n\n \n\nExplain analyze of \"SELECT 1 from tb_klient LIMIT 0\" is:\n\n \n\n\"Limit (cost=0.00..0.13 rows=1 width=0) (actual time=0.001..0.001 rows=0\nloops=1)\"\n\n\" -> Seq Scan on tb_klient (cost=0.00..854.23 rows=6823 width=0) (never\nexecuted)\"\n\n\"Total runtime: 0.025 ms\"\n\n \n\ntb_klient has 8200 rows and 77 cols.\n\n \n\nWhy speed of executing (or planning) some very simple query from string in\npl/pgsql is dependent from whole query or why \"FOR r IN EXECUTE q\" is\nsignifically slower from \"FOR r IN query\"?\n\n \n\n \n\n-------------------------------------------\n\nArtur Zajac\n\n \n\n \n\n\nHi, I have PostgreSQL 9.0.12 on Windows. I have some simple function: CREATE OR REPLACE FUNCTION sfunction() RETURNS BOOL AS$BODY$DECLAREq TEXT;r RECORD;BEGIN q='SELECT 1 from tb_klient LIMIT 0'; FOR r IN EXECUTE q LOOP END LOOP; RETURN NULL; RETURN NULL;END;$BODY$LANGUAGE 'plpgsql'; And some simple Query: explain analyze SELECT sfunction() AS valueFROM (SELECT 5604913 AS id ,5666 AS idtowmag ) AS c LEFT OUTER JOIN tg_tm AS tm ON (tm.ttm_idtowmag=c.idtowmag); When I run this query explain analyze is: Subquery Scan on a (cost=0.00..0.27 rows=1 width=8) (actual time=24.041..24.042 rows=1 loops=1) -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.002 rows=1 loops=1)\"Total runtime: 24.068 ms\" But when I change:1. Table tb_klient to some other table (but not any other – queries with some tables are still slow) or2. “FOR r IN EXECUTE q”change to“FOR r IN SELECT 1 from tb_klient LIMIT 0” or3. 
add “LEFT OUTER JOIN tb_klient AS kl ON (kl.k_idklienta=c.idtowmag)” to query Explain analyze of query is:\"Subquery Scan on a (cost=0.00..0.27 rows=1 width=8) (actual time=1.868..1.869 rows=1 loops=1)\"\" -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.002..0.002 rows=1 loops=1)\"\"Total runtime: 1.894 ms\" Explain analyze of “SELECT 1 from tb_klient LIMIT 0” is: \"Limit (cost=0.00..0.13 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=1)\"\" -> Seq Scan on tb_klient (cost=0.00..854.23 rows=6823 width=0) (never executed)\"\"Total runtime: 0.025 ms\" tb_klient has 8200 rows and 77 cols. Why speed of executing (or planning) some very simple query from string in pl/pgsql is dependent from whole query or why “FOR r IN EXECUTE q” is significally slower from “FOR r IN query”? -------------------------------------------Artur Zajac",
"msg_date": "Thu, 14 Mar 2013 20:22:07 +0100",
"msg_from": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Speed of EXCECUTE in PL/PGSQL"
},
{
"msg_contents": "On Thu, Mar 14, 2013 at 2:22 PM, Artur Zając <[email protected]> wrote:\n> Hi,\n>\n>\n>\n> I have PostgreSQL 9.0.12 on Windows.\n>\n>\n>\n> I have some simple function:\n>\n>\n>\n> CREATE OR REPLACE FUNCTION sfunction() RETURNS BOOL AS\n>\n> $BODY$\n>\n> DECLARE\n>\n> q TEXT;\n>\n> r RECORD;\n>\n> BEGIN\n>\n> q='SELECT 1 from tb_klient LIMIT 0';\n>\n>\n>\n> FOR r IN EXECUTE q\n>\n> LOOP\n>\n> END LOOP;\n>\n> RETURN NULL;\n>\n>\n>\n> RETURN NULL;\n>\n> END;\n>\n> $BODY$\n>\n> LANGUAGE 'plpgsql';\n>\n>\n>\n>\n>\n> And some simple Query:\n>\n>\n>\n>\n>\n> explain analyze SELECT sfunction() AS value\n>\n> FROM (\n>\n> SELECT 5604913 AS id ,5666 AS idtowmag\n>\n> ) AS c\n>\n> LEFT OUTER JOIN tg_tm AS tm ON (tm.ttm_idtowmag=c.idtowmag);\n>\n>\n>\n> When I run this query explain analyze is:\n>\n>\n>\n> Subquery Scan on a (cost=0.00..0.27 rows=1 width=8) (actual\n> time=24.041..24.042 rows=1 loops=1)\n>\n> -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.002\n> rows=1 loops=1)\n>\n> \"Total runtime: 24.068 ms\"\n>\n>\n>\n> But when I change:\n>\n> 1. Table tb_klient to some other table (but not any other – queries\n> with some tables are still slow) or\n>\n> 2. “FOR r IN EXECUTE q”\n> change to\n> “FOR r IN SELECT 1 from tb_klient LIMIT 0” or\n>\n> 3. add “LEFT OUTER JOIN tb_klient AS kl ON\n> (kl.k_idklienta=c.idtowmag)” to query\n>\n>\n>\n> Explain analyze of query is:\n>\n> \"Subquery Scan on a (cost=0.00..0.27 rows=1 width=8) (actual\n> time=1.868..1.869 rows=1 loops=1)\"\n>\n> \" -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.002..0.002\n> rows=1 loops=1)\"\n>\n> \"Total runtime: 1.894 ms\"\n>\n>\n>\n> Explain analyze of “SELECT 1 from tb_klient LIMIT 0” is:\n>\n>\n>\n> \"Limit (cost=0.00..0.13 rows=1 width=0) (actual time=0.001..0.001 rows=0\n> loops=1)\"\n>\n> \" -> Seq Scan on tb_klient (cost=0.00..854.23 rows=6823 width=0) (never\n> executed)\"\n>\n> \"Total runtime: 0.025 ms\"\n>\n>\n>\n> tb_klient has 8200 rows and 77 cols.\n>\n>\n>\n> Why speed of executing (or planning) some very simple query from string in\n> pl/pgsql is dependent from whole query or why “FOR r IN EXECUTE q” is\n> significally slower from “FOR r IN query”?\n\nkinda hard to follow you here. but, it looks like you are adding LIMIT\n0 which makes performance comparison unfair?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Mar 2013 14:36:02 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speed of EXCECUTE in PL/PGSQL"
},
{
"msg_contents": "\nOn 03/14/2013 03:22 PM, Artur Zając wrote:\n>\n> Why speed of executing (or planning) some very simple query from \n> string in pl/pgsql is dependent from whole query or why “FOR r IN \n> EXECUTE q” is significally slower from “FOR r IN query”?\n>\n>\n\nThe whole point of EXECUTE is that it's reparsed and planned each time. \nYou should expect it to be quite a bit slower, and avoid using EXECUTE \nwherever possible.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 14 Mar 2013 15:39:25 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speed of EXCECUTE in PL/PGSQL"
}
] |
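Andrew's point about EXECUTE being re-parsed and re-planned on every call is easy to measure directly. A minimal side-by-side sketch against a hypothetical table t (not the poster's tb_klient): the static FOR loop query is planned once and its plan is cached for the session, while the EXECUTE form pays parse and plan cost on every iteration, which is also why swapping in a different table changes the timings.

-- hypothetical stand-in for tb_klient
CREATE TABLE t (id int PRIMARY KEY, payload text);
INSERT INTO t SELECT g, md5(g::text) FROM generate_series(1, 10000) g;
ANALYZE t;

CREATE OR REPLACE FUNCTION f_static(n int) RETURNS void AS $$
DECLARE r RECORD;
BEGIN
    FOR i IN 1..n LOOP
        FOR r IN SELECT 1 FROM t LIMIT 0 LOOP   -- planned once, plan cached
            NULL;
        END LOOP;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION f_execute(n int) RETURNS void AS $$
DECLARE r RECORD;
BEGIN
    FOR i IN 1..n LOOP
        FOR r IN EXECUTE 'SELECT 1 FROM t LIMIT 0' LOOP   -- re-planned each time
            NULL;
        END LOOP;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- compare with \timing enabled in psql
SELECT f_static(1000);
SELECT f_execute(1000);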
[
{
"msg_contents": "Hi Experts,\n\nI found if we join the master table with other small table, then the\nrunning time is slow. While, if we join each child table with the small\ntable, then it's very fast. Any comments and suggestions are greatly\nappreciated.\n\n*For example, par_list table is small(about 50k rows), while par_est is\nvery large, for each day it's about 400MB. Therefore, we partition it by\nday. However, the query plan for joining the master table with par_list is\nbad, so the running time is slow. The good plan should be join each\npartition table with par_list separately, then aggregate the result\ntogether. *\n*\n*\n*1. Join the master table with a small table. It's slow.*\ndailyest=# explain (analyze on, buffers on)\ndailyest-# SELECT e.date, max(e.estimate)\ndailyest-# FROM\ndailyest-# par_list l,\ndailyest-# par_est e\ndailyest-# WHERE\ndailyest-# l.id = e.list_id and\ndailyest-# e.date BETWEEN '2012-07-08' and '2012-07-10' and\ndailyest-# l.fid = 1 and\ndailyest-# l.sid = 143441 and\ndailyest-# l.cid in (36, 39, 6000) and\ndailyest-# e.aid = 333710667\ndailyest-# GROUP BY e.date\ndailyest-# ORDER BY e.date;\n\n-----------------------\n GroupAggregate (cost=745326.86..745326.88 rows=1 width=8) (actual\ntime=6281.364..6281.366 rows=3 loops=1)\n Buffers: shared hit=3 read=175869\n -> Sort (cost=745326.86..745326.86 rows=1 width=8) (actual\ntime=6281.358..6281.358 rows=6 loops=1)\n Sort Key: e.date\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=3 read=175869\n -> Nested Loop (cost=0.00..745326.85 rows=1 width=8) (actual\ntime=1228.493..6281.349 rows=6 loops=1)\n Join Filter: (l.id = e.list_id)\n Rows Removed by Join Filter: 4040\n Buffers: shared hit=3 read=175869\n -> Seq Scan on par_list l (cost=0.00..1213.10 rows=2\nwidth=4) (actual time=0.010..38.272 rows=2 loops=1)\n Filter: ((fid = 1) AND (sid = 143441) AND (cid = ANY\n('{36,39,6000}'::integer[])))\n Rows Removed by Filter: 50190\n Buffers: shared hit=3 read=269\n -> Materialize (cost=0.00..744102.56 rows=407 width=12)\n(actual time=9.707..3121.053 rows=2023 loops=2)\n Buffers: shared read=175600\n -> Append (cost=0.00..744100.52 rows=407 width=12)\n(actual time=19.410..6240.044 rows=2023 loops=1)\n Buffers: shared read=175600\n -> Seq Scan on par_est e (cost=0.00..0.00\nrows=1 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((date >= '2012-07-08'::date) AND\n(date <= '2012-07-10'::date) AND (aid = 333710667))\n -> Seq Scan on par_est_2012_07 e\n (cost=0.00..0.00 rows=1 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((date >= '2012-07-08'::date) AND\n(date <= '2012-07-10'::date) AND (aid = 333710667))\n -> Seq Scan on par_est_2012_07_08 e\n (cost=0.00..247736.09 rows=135 width=12) (actual time=19.408..2088.627\nrows=674 loops=1)\n Filter: ((date >= '2012-07-08'::date) AND\n(date <= '2012-07-10'::date) AND (aid = 333710667))\n Rows Removed by Filter: 10814878\n Buffers: shared read=58463\n -> Seq Scan on par_est_2012_07_09 e\n (cost=0.00..248008.81 rows=137 width=12) (actual time=6.390..1963.238\nrows=676 loops=1)\n Filter: ((date >= '2012-07-08'::date) AND\n(date <= '2012-07-10'::date) AND (aid = 333710667))\n Rows Removed by Filter: 10826866\n Buffers: shared read=58528\n -> Seq Scan on par_est_2012_07_10 e\n (cost=0.00..248355.62 rows=133 width=12) (actual time=15.135..2187.312\nrows=673 loops=1)\n Filter: ((date >= '2012-07-08'::date) AND\n(date <= '2012-07-10'::date) AND (aid = 333710667))\n Rows Removed by Filter: 10841989\n Buffers: shared read=58609\n Total runtime: 
6281.444 ms\n(35 rows)\n\n\n*2. Join each partition table with small table (par_list) and union the\nresult. This runs very fast. However, it's not reasonable if we union 180\nSELECT statements (for example, the date is from 2012-07-01 to 2012-12-31.\nAny better suggestions.*\n*\n*\ndailyest=# explain (analyze on, buffers on)\ndailyest-# SELECT e.date, max(e.estimate)\ndailyest-# FROM\ndailyest-# par_list l,\ndailyest-# par_est_2012_07_08 e\ndailyest-# WHERE\ndailyest-# l.id = e.list_id and\ndailyest-# e.date = '2012-07-08' and\ndailyest-# l.fid = 1 and\ndailyest-# l.sid = 143441 and\ndailyest-# l.cid in (36, 39, 6000) and\ndailyest-# e.aid = 333710667\ndailyest-# GROUP BY e.date\ndailyest-# UNION ALL\ndailyest-# SELECT e.date, max(e.estimate)\ndailyest-# FROM\ndailyest-# par_list l,\ndailyest-# par_est_2012_07_09 e\ndailyest-# WHERE\ndailyest-# l.id = e.list_id and\ndailyest-# e.date = '2012-07-09' and\ndailyest-# l.fid = 1 and\ndailyest-# l.sid = 143441 and\ndailyest-# l.cid in (36, 39, 6000) and\ndailyest-# e.aid = 333710667\ndailyest-# GROUP BY e.date\ndailyest-# UNION ALL\ndailyest-# SELECT e.date, max(e.estimate)\ndailyest-# FROM\ndailyest-# par_list l,\ndailyest-# par_est_2012_07_10 e\ndailyest-# WHERE\ndailyest-# l.id = e.list_id and\ndailyest-# e.date = '2012-07-10' and\ndailyest-# l.fid = 1 and\ndailyest-# l.sid = 143441 and\ndailyest-# l.cid in (36, 39, 6000) and\ndailyest-# e.aid = 333710667\ndailyest-# GROUP BY e.date\ndailyest-# ;\n\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------\n Result (cost=0.00..91.49 rows=3 width=8) (actual time=83.736..254.912\nrows=3 loops=1)\n Buffers: shared hit=27 read=28\n -> Append (cost=0.00..91.49 rows=3 width=8) (actual\ntime=83.735..254.910 rows=3 loops=1)\n Buffers: shared hit=27 read=28\n -> GroupAggregate (cost=0.00..30.48 rows=1 width=8) (actual\ntime=83.735..83.735 rows=1 loops=1)\n Buffers: shared hit=9 read=12\n -> Nested Loop (cost=0.00..30.47 rows=1 width=8) (actual\ntime=63.920..83.728 rows=2 loops=1)\n Buffers: shared hit=9 read=12\n -> Index Scan using par_list_sid_fid_cid_key on\npar_list l (cost=0.00..18.56 rows=2 width=4) (actual time=1.540..1.550\nrows=2 loops=1)\n Index Cond: ((sid = 143441) AND (fid = 1) AND\n(cid = ANY ('{36,39,6000}'::integer[])))\n Buffers: shared hit=7 read=4\n -> Index Only Scan using par_est_2012_07_08_pkey on\npar_est_2012_07_08 e (cost=0.00..5.94 rows=1 width=12) (actual\ntime=41.083..41.083 rows=1 loops=2)\n Index Cond: ((date = '2012-07-08'::date) AND\n(list_id = l.id) AND (aid = 333710667))\n Heap Fetches: 0\n Buffers: shared hit=2 read=8\n -> GroupAggregate (cost=0.00..30.48 rows=1 width=8) (actual\ntime=76.911..76.911 rows=1 loops=1)\n Buffers: shared hit=9 read=8\n -> Nested Loop (cost=0.00..30.47 rows=1 width=8) (actual\ntime=57.580..76.909 rows=2 loops=1)\n Buffers: shared hit=9 read=8\n -> Index Scan using par_list_sid_fid_cid_key on\npar_list l (cost=0.00..18.56 rows=2 width=4) (actual time=0.007..0.016\nrows=2 loops=1)\n Index Cond: ((sid = 143441) AND (fid = 1) AND\n(cid = ANY ('{36,39,6000}'::integer[])))\n Buffers: shared hit=7\n -> Index Only Scan using par_est_2012_07_09_pkey on\npar_est_2012_07_09 e (cost=0.00..5.94 rows=1 width=12) (actual\ntime=38.440..38.442 rows=1 loops=2)\n Index Cond: ((date = '2012-07-09'::date) AND\n(list_id = l.id) AND (aid = 333710667))\n Heap Fetches: 0\n Buffers: shared hit=2 read=8\n -> 
GroupAggregate (cost=0.00..30.49 rows=1 width=8) (actual\ntime=94.262..94.262 rows=1 loops=1)\n Buffers: shared hit=9 read=8\n -> Nested Loop (cost=0.00..30.47 rows=1 width=8) (actual\ntime=74.393..94.259 rows=2 loops=1)\n Buffers: shared hit=9 read=8\n -> Index Scan using par_list_sid_fid_cid_key on\npar_list l (cost=0.00..18.56 rows=2 width=4) (actual time=0.007..0.017\nrows=2 loops=1)\n Index Cond: ((sid = 143441) AND (fid = 1) AND\n(cid = ANY ('{36,39,6000}'::integer[])))\n Buffers: shared hit=7\n -> Index Only Scan using par_est_2012_07_10_pkey on\npar_est_2012_07_10 e (cost=0.00..5.95 rows=1 width=12) (actual\ntime=47.116..47.117 rows=1 loops=2)\n Index Cond: ((date = '2012-07-10'::date) AND\n(list_id = l.id) AND (aid = 333710667))\n Heap Fetches: 0\n Buffers: shared hit=2 read=8\n Total runtime: 255.074 ms\n(38 rows)\n\nHi Experts,I found if we join the master table with other small table, then the running time is slow. While, if we join each child table with the small table, then it's very fast. Any comments and suggestions are greatly appreciated.\nFor example, par_list table is small(about 50k rows), while par_est is very large, for each day it's about 400MB. Therefore, we partition it by day. However, the query plan for joining the master table with par_list is bad, so the running time is slow. The good plan should be join each partition table with par_list separately, then aggregate the result together. \n1. Join the master table with a small table. It's slow.dailyest=# explain (analyze on, buffers on) \ndailyest-# SELECT e.date, max(e.estimate)dailyest-# FROM dailyest-# par_list l, \ndailyest-# par_est edailyest-# WHERE dailyest-# l.id = e.list_id and\ndailyest-# e.date BETWEEN '2012-07-08' and '2012-07-10' and dailyest-# l.fid = 1 and\ndailyest-# l.sid = 143441 anddailyest-# l.cid in (36, 39, 6000) anddailyest-# e.aid = 333710667\ndailyest-# GROUP BY e.datedailyest-# ORDER BY e.date;\n----------------------- GroupAggregate (cost=745326.86..745326.88 rows=1 width=8) (actual time=6281.364..6281.366 rows=3 loops=1)\n Buffers: shared hit=3 read=175869 -> Sort (cost=745326.86..745326.86 rows=1 width=8) (actual time=6281.358..6281.358 rows=6 loops=1)\n Sort Key: e.date Sort Method: quicksort Memory: 25kB Buffers: shared hit=3 read=175869\n -> Nested Loop (cost=0.00..745326.85 rows=1 width=8) (actual time=1228.493..6281.349 rows=6 loops=1) Join Filter: (l.id = e.list_id)\n Rows Removed by Join Filter: 4040 Buffers: shared hit=3 read=175869 -> Seq Scan on par_list l (cost=0.00..1213.10 rows=2 width=4) (actual time=0.010..38.272 rows=2 loops=1)\n Filter: ((fid = 1) AND (sid = 143441) AND (cid = ANY ('{36,39,6000}'::integer[]))) Rows Removed by Filter: 50190\n Buffers: shared hit=3 read=269 -> Materialize (cost=0.00..744102.56 rows=407 width=12) (actual time=9.707..3121.053 rows=2023 loops=2)\n Buffers: shared read=175600 -> Append (cost=0.00..744100.52 rows=407 width=12) (actual time=19.410..6240.044 rows=2023 loops=1)\n Buffers: shared read=175600 -> Seq Scan on par_est e (cost=0.00..0.00 rows=1 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((date >= '2012-07-08'::date) AND (date <= '2012-07-10'::date) AND (aid = 333710667)) -> Seq Scan on par_est_2012_07 e (cost=0.00..0.00 rows=1 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: ((date >= '2012-07-08'::date) AND (date <= '2012-07-10'::date) AND (aid = 333710667)) -> Seq Scan on par_est_2012_07_08 e (cost=0.00..247736.09 rows=135 width=12) (actual time=19.408..2088.627 rows=674 loops=1)\n Filter: ((date >= 
'2012-07-08'::date) AND (date <= '2012-07-10'::date) AND (aid = 333710667)) Rows Removed by Filter: 10814878\n Buffers: shared read=58463 -> Seq Scan on par_est_2012_07_09 e (cost=0.00..248008.81 rows=137 width=12) (actual time=6.390..1963.238 rows=676 loops=1)\n Filter: ((date >= '2012-07-08'::date) AND (date <= '2012-07-10'::date) AND (aid = 333710667)) Rows Removed by Filter: 10826866\n Buffers: shared read=58528 -> Seq Scan on par_est_2012_07_10 e (cost=0.00..248355.62 rows=133 width=12) (actual time=15.135..2187.312 rows=673 loops=1)\n Filter: ((date >= '2012-07-08'::date) AND (date <= '2012-07-10'::date) AND (aid = 333710667)) Rows Removed by Filter: 10841989\n Buffers: shared read=58609 Total runtime: 6281.444 ms(35 rows)\n2. Join each partition table with small table (par_list) and union the result. This runs very fast. However, it's not reasonable if we union 180 SELECT statements (for example, the date is from 2012-07-01 to 2012-12-31. Any better suggestions.\ndailyest=# explain (analyze on, buffers on)dailyest-# SELECT e.date, max(e.estimate)\ndailyest-# FROM dailyest-# par_list l, dailyest-# par_est_2012_07_08 e\ndailyest-# WHERE dailyest-# l.id = e.list_id anddailyest-# e.date = '2012-07-08' and \ndailyest-# l.fid = 1 anddailyest-# l.sid = 143441 anddailyest-# l.cid in (36, 39, 6000) and\ndailyest-# e.aid = 333710667dailyest-# GROUP BY e.datedailyest-# UNION ALL\ndailyest-# SELECT e.date, max(e.estimate)dailyest-# FROM dailyest-# par_list l, \ndailyest-# par_est_2012_07_09 edailyest-# WHERE dailyest-# l.id = e.list_id and\ndailyest-# e.date = '2012-07-09' and dailyest-# l.fid = 1 anddailyest-# l.sid = 143441 and\ndailyest-# l.cid in (36, 39, 6000) anddailyest-# e.aid = 333710667dailyest-# GROUP BY e.date\ndailyest-# UNION ALLdailyest-# SELECT e.date, max(e.estimate)dailyest-# FROM \ndailyest-# par_list l, dailyest-# par_est_2012_07_10 edailyest-# WHERE \ndailyest-# l.id = e.list_id anddailyest-# e.date = '2012-07-10' and \ndailyest-# l.fid = 1 anddailyest-# l.sid = 143441 anddailyest-# l.cid in (36, 39, 6000) and\ndailyest-# e.aid = 333710667dailyest-# GROUP BY e.datedailyest-# ;\n QUERY PLAN \n ----------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------ Result (cost=0.00..91.49 rows=3 width=8) (actual time=83.736..254.912 rows=3 loops=1)\n Buffers: shared hit=27 read=28 -> Append (cost=0.00..91.49 rows=3 width=8) (actual time=83.735..254.910 rows=3 loops=1)\n Buffers: shared hit=27 read=28 -> GroupAggregate (cost=0.00..30.48 rows=1 width=8) (actual time=83.735..83.735 rows=1 loops=1)\n Buffers: shared hit=9 read=12 -> Nested Loop (cost=0.00..30.47 rows=1 width=8) (actual time=63.920..83.728 rows=2 loops=1)\n Buffers: shared hit=9 read=12 -> Index Scan using par_list_sid_fid_cid_key on par_list l (cost=0.00..18.56 rows=2 width=4) (actual time=1.540..1.550 rows=2 loops=1)\n Index Cond: ((sid = 143441) AND (fid = 1) AND (cid = ANY ('{36,39,6000}'::integer[]))) Buffers: shared hit=7 read=4\n -> Index Only Scan using par_est_2012_07_08_pkey on par_est_2012_07_08 e (cost=0.00..5.94 rows=1 width=12) (actual time=41.083..41.083 rows=1 loops=2)\n Index Cond: ((date = '2012-07-08'::date) AND (list_id = l.id) AND (aid = 333710667)) Heap Fetches: 0\n Buffers: shared hit=2 read=8 -> GroupAggregate (cost=0.00..30.48 rows=1 width=8) (actual time=76.911..76.911 rows=1 loops=1)\n Buffers: shared hit=9 read=8 -> Nested Loop (cost=0.00..30.47 rows=1 width=8) 
(actual time=57.580..76.909 rows=2 loops=1)\n Buffers: shared hit=9 read=8 -> Index Scan using par_list_sid_fid_cid_key on par_list l (cost=0.00..18.56 rows=2 width=4) (actual time=0.007..0.016 rows=2 loops=1)\n Index Cond: ((sid = 143441) AND (fid = 1) AND (cid = ANY ('{36,39,6000}'::integer[]))) Buffers: shared hit=7\n -> Index Only Scan using par_est_2012_07_09_pkey on par_est_2012_07_09 e (cost=0.00..5.94 rows=1 width=12) (actual time=38.440..38.442 rows=1 loops=2)\n Index Cond: ((date = '2012-07-09'::date) AND (list_id = l.id) AND (aid = 333710667)) Heap Fetches: 0\n Buffers: shared hit=2 read=8 -> GroupAggregate (cost=0.00..30.49 rows=1 width=8) (actual time=94.262..94.262 rows=1 loops=1)\n Buffers: shared hit=9 read=8 -> Nested Loop (cost=0.00..30.47 rows=1 width=8) (actual time=74.393..94.259 rows=2 loops=1)\n Buffers: shared hit=9 read=8 -> Index Scan using par_list_sid_fid_cid_key on par_list l (cost=0.00..18.56 rows=2 width=4) (actual time=0.007..0.017 rows=2 loops=1)\n Index Cond: ((sid = 143441) AND (fid = 1) AND (cid = ANY ('{36,39,6000}'::integer[]))) Buffers: shared hit=7\n -> Index Only Scan using par_est_2012_07_10_pkey on par_est_2012_07_10 e (cost=0.00..5.95 rows=1 width=12) (actual time=47.116..47.117 rows=1 loops=2)\n Index Cond: ((date = '2012-07-10'::date) AND (list_id = l.id) AND (aid = 333710667)) Heap Fetches: 0\n Buffers: shared hit=2 read=8 Total runtime: 255.074 ms(38 rows)",
"msg_date": "Fri, 15 Mar 2013 23:02:28 +0800",
"msg_from": "Ao Jianwang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Join the master table with other table is very slow (partitioning)"
},
{
"msg_contents": "Hi Rumman,\n\nThanks for your response. I follow the guide to build the partition. The\nsettings should be good. See the following result. Any insight? thanks.\n\ndailyest=# select version();\n version\n\n------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc\n(Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n(1 row)\n\ndailyest=# show constraint_exclusion;\n constraint_exclusion\n----------------------\n on\n(1 row)\n\nOn Fri, Mar 15, 2013 at 11:04 PM, AI Rumman <[email protected]> wrote:\n\n> Which version of Postgresql are you using?\n> Have you set constraint_exclusion to parition?\n>\n>\n> On Fri, Mar 15, 2013 at 11:02 AM, Ao Jianwang <[email protected]> wrote:\n>\n>> Hi Experts,\n>>\n>> I found if we join the master table with other small table, then the\n>> running time is slow. While, if we join each child table with the small\n>> table, then it's very fast. Any comments and suggestions are greatly\n>> appreciated.\n>>\n>> *For example, par_list table is small(about 50k rows), while par_est is\n>> very large, for each day it's about 400MB. Therefore, we partition it by\n>> day. However, the query plan for joining the master table with par_list is\n>> bad, so the running time is slow. The good plan should be join each\n>> partition table with par_list separately, then aggregate the result\n>> together. *\n>> *\n>> *\n>> *1. Join the master table with a small table. It's slow.*\n>> dailyest=# explain (analyze on, buffers on)\n>> dailyest-# SELECT e.date, max(e.estimate)\n>> dailyest-# FROM\n>> dailyest-# par_list l,\n>> dailyest-# par_est e\n>> dailyest-# WHERE\n>> dailyest-# l.id = e.list_id and\n>> dailyest-# e.date BETWEEN '2012-07-08' and '2012-07-10'\n>> and\n>> dailyest-# l.fid = 1 and\n>> dailyest-# l.sid = 143441 and\n>> dailyest-# l.cid in (36, 39, 6000) and\n>> dailyest-# e.aid = 333710667\n>> dailyest-# GROUP BY e.date\n>> dailyest-# ORDER BY e.date;\n>>\n>> -----------------------\n>> GroupAggregate (cost=745326.86..745326.88 rows=1 width=8) (actual\n>> time=6281.364..6281.366 rows=3 loops=1)\n>> Buffers: shared hit=3 read=175869\n>> -> Sort (cost=745326.86..745326.86 rows=1 width=8) (actual\n>> time=6281.358..6281.358 rows=6 loops=1)\n>> Sort Key: e.date\n>> Sort Method: quicksort Memory: 25kB\n>> Buffers: shared hit=3 read=175869\n>> -> Nested Loop (cost=0.00..745326.85 rows=1 width=8) (actual\n>> time=1228.493..6281.349 rows=6 loops=1)\n>> Join Filter: (l.id = e.list_id)\n>> Rows Removed by Join Filter: 4040\n>> Buffers: shared hit=3 read=175869\n>> -> Seq Scan on par_list l (cost=0.00..1213.10 rows=2\n>> width=4) (actual time=0.010..38.272 rows=2 loops=1)\n>> Filter: ((fid = 1) AND (sid = 143441) AND (cid = ANY\n>> ('{36,39,6000}'::integer[])))\n>> Rows Removed by Filter: 50190\n>> Buffers: shared hit=3 read=269\n>> -> Materialize (cost=0.00..744102.56 rows=407 width=12)\n>> (actual time=9.707..3121.053 rows=2023 loops=2)\n>> Buffers: shared read=175600\n>> -> Append (cost=0.00..744100.52 rows=407 width=12)\n>> (actual time=19.410..6240.044 rows=2023 loops=1)\n>> Buffers: shared read=175600\n>> -> Seq Scan on par_est e (cost=0.00..0.00\n>> rows=1 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n>> Filter: ((date >= '2012-07-08'::date)\n>> AND (date <= '2012-07-10'::date) AND (aid = 333710667))\n>> -> Seq Scan on par_est_2012_07 e\n>> (cost=0.00..0.00 rows=1 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n>> Filter: ((date >= 
'2012-07-08'::date)\n>> AND (date <= '2012-07-10'::date) AND (aid = 333710667))\n>> -> Seq Scan on par_est_2012_07_08 e\n>> (cost=0.00..247736.09 rows=135 width=12) (actual time=19.408..2088.627\n>> rows=674 loops=1)\n>> Filter: ((date >= '2012-07-08'::date)\n>> AND (date <= '2012-07-10'::date) AND (aid = 333710667))\n>> Rows Removed by Filter: 10814878\n>> Buffers: shared read=58463\n>> -> Seq Scan on par_est_2012_07_09 e\n>> (cost=0.00..248008.81 rows=137 width=12) (actual time=6.390..1963.238\n>> rows=676 loops=1)\n>> Filter: ((date >= '2012-07-08'::date)\n>> AND (date <= '2012-07-10'::date) AND (aid = 333710667))\n>> Rows Removed by Filter: 10826866\n>> Buffers: shared read=58528\n>> -> Seq Scan on par_est_2012_07_10 e\n>> (cost=0.00..248355.62 rows=133 width=12) (actual time=15.135..2187.312\n>> rows=673 loops=1)\n>> Filter: ((date >= '2012-07-08'::date)\n>> AND (date <= '2012-07-10'::date) AND (aid = 333710667))\n>> Rows Removed by Filter: 10841989\n>> Buffers: shared read=58609\n>> Total runtime: 6281.444 ms\n>> (35 rows)\n>>\n>>\n>> *2. Join each partition table with small table (par_list) and union the\n>> result. This runs very fast. However, it's not reasonable if we union 180\n>> SELECT statements (for example, the date is from 2012-07-01 to 2012-12-31.\n>> Any better suggestions.*\n>> *\n>> *\n>> dailyest=# explain (analyze on, buffers on)\n>> dailyest-# SELECT e.date, max(e.estimate)\n>> dailyest-# FROM\n>> dailyest-# par_list l,\n>> dailyest-# par_est_2012_07_08 e\n>> dailyest-# WHERE\n>> dailyest-# l.id = e.list_id and\n>> dailyest-# e.date = '2012-07-08' and\n>> dailyest-# l.fid = 1 and\n>> dailyest-# l.sid = 143441 and\n>> dailyest-# l.cid in (36, 39, 6000) and\n>> dailyest-# e.aid = 333710667\n>> dailyest-# GROUP BY e.date\n>> dailyest-# UNION ALL\n>> dailyest-# SELECT e.date, max(e.estimate)\n>> dailyest-# FROM\n>> dailyest-# par_list l,\n>> dailyest-# par_est_2012_07_09 e\n>> dailyest-# WHERE\n>> dailyest-# l.id = e.list_id and\n>> dailyest-# e.date = '2012-07-09' and\n>> dailyest-# l.fid = 1 and\n>> dailyest-# l.sid = 143441 and\n>> dailyest-# l.cid in (36, 39, 6000) and\n>> dailyest-# e.aid = 333710667\n>> dailyest-# GROUP BY e.date\n>> dailyest-# UNION ALL\n>> dailyest-# SELECT e.date, max(e.estimate)\n>> dailyest-# FROM\n>> dailyest-# par_list l,\n>> dailyest-# par_est_2012_07_10 e\n>> dailyest-# WHERE\n>> dailyest-# l.id = e.list_id and\n>> dailyest-# e.date = '2012-07-10' and\n>> dailyest-# l.fid = 1 and\n>> dailyest-# l.sid = 143441 and\n>> dailyest-# l.cid in (36, 39, 6000) and\n>> dailyest-# e.aid = 333710667\n>> dailyest-# GROUP BY e.date\n>> dailyest-# ;\n>>\n>>\n>> QUERY PLAN\n>>\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------------------------\n>> ------------------------------------------------------\n>> Result (cost=0.00..91.49 rows=3 width=8) (actual time=83.736..254.912\n>> rows=3 loops=1)\n>> Buffers: shared hit=27 read=28\n>> -> Append (cost=0.00..91.49 rows=3 width=8) (actual\n>> time=83.735..254.910 rows=3 loops=1)\n>> Buffers: shared hit=27 read=28\n>> -> GroupAggregate (cost=0.00..30.48 rows=1 width=8) (actual\n>> time=83.735..83.735 rows=1 loops=1)\n>> Buffers: shared hit=9 read=12\n>> -> Nested Loop (cost=0.00..30.47 rows=1 width=8) (actual\n>> time=63.920..83.728 rows=2 loops=1)\n>> Buffers: shared hit=9 read=12\n>> -> Index Scan using par_list_sid_fid_cid_key on\n>> par_list l (cost=0.00..18.56 rows=2 width=4) (actual time=1.540..1.550\n>> rows=2 loops=1)\n>> 
Index Cond: ((sid = 143441) AND (fid = 1) AND\n>> (cid = ANY ('{36,39,6000}'::integer[])))\n>> Buffers: shared hit=7 read=4\n>> -> Index Only Scan using par_est_2012_07_08_pkey on\n>> par_est_2012_07_08 e (cost=0.00..5.94 rows=1 width=12) (actual time=\n>> 41.083..41.083 rows=1 loops=2)\n>> Index Cond: ((date = '2012-07-08'::date) AND\n>> (list_id = l.id) AND (aid = 333710667))\n>> Heap Fetches: 0\n>> Buffers: shared hit=2 read=8\n>> -> GroupAggregate (cost=0.00..30.48 rows=1 width=8) (actual\n>> time=76.911..76.911 rows=1 loops=1)\n>> Buffers: shared hit=9 read=8\n>> -> Nested Loop (cost=0.00..30.47 rows=1 width=8) (actual\n>> time=57.580..76.909 rows=2 loops=1)\n>> Buffers: shared hit=9 read=8\n>> -> Index Scan using par_list_sid_fid_cid_key on\n>> par_list l (cost=0.00..18.56 rows=2 width=4) (actual time=0.007..0.016\n>> rows=2 loops=1)\n>> Index Cond: ((sid = 143441) AND (fid = 1) AND\n>> (cid = ANY ('{36,39,6000}'::integer[])))\n>> Buffers: shared hit=7\n>> -> Index Only Scan using par_est_2012_07_09_pkey on\n>> par_est_2012_07_09 e (cost=0.00..5.94 rows=1 width=12) (actual\n>> time=38.440..38.442 rows=1 loops=2)\n>> Index Cond: ((date = '2012-07-09'::date) AND\n>> (list_id = l.id) AND (aid = 333710667))\n>> Heap Fetches: 0\n>> Buffers: shared hit=2 read=8\n>> -> GroupAggregate (cost=0.00..30.49 rows=1 width=8) (actual\n>> time=94.262..94.262 rows=1 loops=1)\n>> Buffers: shared hit=9 read=8\n>> -> Nested Loop (cost=0.00..30.47 rows=1 width=8) (actual\n>> time=74.393..94.259 rows=2 loops=1)\n>> Buffers: shared hit=9 read=8\n>> -> Index Scan using par_list_sid_fid_cid_key on\n>> par_list l (cost=0.00..18.56 rows=2 width=4) (actual time=0.007..0.017\n>> rows=2 loops=1)\n>> Index Cond: ((sid = 143441) AND (fid = 1) AND\n>> (cid = ANY ('{36,39,6000}'::integer[])))\n>> Buffers: shared hit=7\n>> -> Index Only Scan using par_est_2012_07_10_pkey on\n>> par_est_2012_07_10 e (cost=0.00..5.95 rows=1 width=12) (actual\n>> time=47.116..47.117 rows=1 loops=2)\n>> Index Cond: ((date = '2012-07-10'::date) AND\n>> (list_id = l.id) AND (aid = 333710667))\n>> Heap Fetches: 0\n>> Buffers: shared hit=2 read=8\n>> Total runtime: 255.074 ms\n>> (38 rows)\n>>\n>>\n>",
"msg_date": "Fri, 15 Mar 2013 23:09:31 +0800",
"msg_from": "Ao Jianwang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Join the master table with other table is very slow\n (partitioning)"
},
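For reference: in PostgreSQL 9.2, constraint_exclusion accepts off, on, and partition. 'partition' (the default) applies CHECK-constraint pruning only to inheritance children and UNION ALL subqueries, while 'on' applies it to every query. A minimal way to check or adjust it for a session:

    SET constraint_exclusion = partition;   -- or 'on'
    SHOW constraint_exclusion;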
{
"msg_contents": "On Fri, Mar 15, 2013 at 11:09 AM, Ao Jianwang <[email protected]> wrote:\n\n> Hi Rumman,\n>\n> Thanks for your response. I follow the guide to build the partition. The\n> settings should be good. See the following result. Any insight? thanks.\n>\n> dailyest=# select version();\n> version\n>\n>\n> ------------------------------------------------------------------------------------------------------------\n> PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc\n> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n> (1 row)\n>\n> dailyest=# show constraint_exclusion;\n> constraint_exclusion\n> ----------------------\n> on\n> (1 row)\n>\n> On Fri, Mar 15, 2013 at 11:04 PM, AI Rumman <[email protected]> wrote:\n>\n>> Which version of Postgresql are you using?\n>> Have you set constraint_exclusion to parition?\n>>\n>>\n>> On Fri, Mar 15, 2013 at 11:02 AM, Ao Jianwang <[email protected]> wrote:\n>>\n>>> Hi Experts,\n>>>\n>>> I found if we join the master table with other small table, then the\n>>> running time is slow. While, if we join each child table with the small\n>>> table, then it's very fast. Any comments and suggestions are greatly\n>>> appreciated.\n>>>\n>>> *For example, par_list table is small(about 50k rows), while par_est is\n>>> very large, for each day it's about 400MB. Therefore, we partition it by\n>>> day. However, the query plan for joining the master table with par_list is\n>>> bad, so the running time is slow. The good plan should be join each\n>>> partition table with par_list separately, then aggregate the result\n>>> together. *\n>>> *\n>>> *\n>>> *1. Join the master table with a small table. It's slow.*\n>>> dailyest=# explain (analyze on, buffers on)\n>>> dailyest-# SELECT e.date, max(e.estimate)\n>>> dailyest-# FROM\n>>> dailyest-# par_list l,\n>>> dailyest-# par_est e\n>>> dailyest-# WHERE\n>>> dailyest-# l.id = e.list_id and\n>>> dailyest-# e.date BETWEEN '2012-07-08' and '2012-07-10'\n>>> and\n>>> dailyest-# l.fid = 1 and\n>>> dailyest-# l.sid = 143441 and\n>>> dailyest-# l.cid in (36, 39, 6000) and\n>>> dailyest-# e.aid = 333710667\n>>> dailyest-# GROUP BY e.date\n>>> dailyest-# ORDER BY e.date;\n>>>\n>>> -----------------------\n>>> GroupAggregate (cost=745326.86..745326.88 rows=1 width=8) (actual\n>>> time=6281.364..6281.366 rows=3 loops=1)\n>>> Buffers: shared hit=3 read=175869\n>>> -> Sort (cost=745326.86..745326.86 rows=1 width=8) (actual\n>>> time=6281.358..6281.358 rows=6 loops=1)\n>>> Sort Key: e.date\n>>> Sort Method: quicksort Memory: 25kB\n>>> Buffers: shared hit=3 read=175869\n>>> -> Nested Loop (cost=0.00..745326.85 rows=1 width=8) (actual\n>>> time=1228.493..6281.349 rows=6 loops=1)\n>>> Join Filter: (l.id = e.list_id)\n>>> Rows Removed by Join Filter: 4040\n>>> Buffers: shared hit=3 read=175869\n>>> -> Seq Scan on par_list l (cost=0.00..1213.10 rows=2\n>>> width=4) (actual time=0.010..38.272 rows=2 loops=1)\n>>> Filter: ((fid = 1) AND (sid = 143441) AND (cid =\n>>> ANY ('{36,39,6000}'::integer[])))\n>>> Rows Removed by Filter: 50190\n>>> Buffers: shared hit=3 read=269\n>>> -> Materialize (cost=0.00..744102.56 rows=407 width=12)\n>>> (actual time=9.707..3121.053 rows=2023 loops=2)\n>>> Buffers: shared read=175600\n>>> -> Append (cost=0.00..744100.52 rows=407\n>>> width=12) (actual time=19.410..6240.044 rows=2023 loops=1)\n>>> Buffers: shared read=175600\n>>> -> Seq Scan on par_est e (cost=0.00..0.00\n>>> rows=1 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n>>> Filter: ((date >= '2012-07-08'::date)\n>>> AND (date <= 
'2012-07-10'::date) AND (aid = 333710667))\n>>> -> Seq Scan on par_est_2012_07 e\n>>> (cost=0.00..0.00 rows=1 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n>>> Filter: ((date >= '2012-07-08'::date)\n>>> AND (date <= '2012-07-10'::date) AND (aid = 333710667))\n>>> -> Seq Scan on par_est_2012_07_08 e\n>>> (cost=0.00..247736.09 rows=135 width=12) (actual time=19.408..2088.627\n>>> rows=674 loops=1)\n>>> Filter: ((date >= '2012-07-08'::date)\n>>> AND (date <= '2012-07-10'::date) AND (aid = 333710667))\n>>> Rows Removed by Filter: 10814878\n>>> Buffers: shared read=58463\n>>> -> Seq Scan on par_est_2012_07_09 e\n>>> (cost=0.00..248008.81 rows=137 width=12) (actual time=6.390..1963.238\n>>> rows=676 loops=1)\n>>> Filter: ((date >= '2012-07-08'::date)\n>>> AND (date <= '2012-07-10'::date) AND (aid = 333710667))\n>>> Rows Removed by Filter: 10826866\n>>> Buffers: shared read=58528\n>>> -> Seq Scan on par_est_2012_07_10 e\n>>> (cost=0.00..248355.62 rows=133 width=12) (actual time=15.135..2187.312\n>>> rows=673 loops=1)\n>>> Filter: ((date >= '2012-07-08'::date)\n>>> AND (date <= '2012-07-10'::date) AND (aid = 333710667))\n>>> Rows Removed by Filter: 10841989\n>>> Buffers: shared read=58609\n>>> Total runtime: 6281.444 ms\n>>> (35 rows)\n>>>\n>>>\n>>> *2. Join each partition table with small table (par_list) and union the\n>>> result. This runs very fast. However, it's not reasonable if we union 180\n>>> SELECT statements (for example, the date is from 2012-07-01 to 2012-12-31.\n>>> Any better suggestions.*\n>>> *\n>>> *\n>>> dailyest=# explain (analyze on, buffers on)\n>>> dailyest-# SELECT e.date, max(e.estimate)\n>>> dailyest-# FROM\n>>> dailyest-# par_list l,\n>>> dailyest-# par_est_2012_07_08 e\n>>> dailyest-# WHERE\n>>> dailyest-# l.id = e.list_id and\n>>> dailyest-# e.date = '2012-07-08' and\n>>> dailyest-# l.fid = 1 and\n>>> dailyest-# l.sid = 143441 and\n>>> dailyest-# l.cid in (36, 39, 6000) and\n>>> dailyest-# e.aid = 333710667\n>>> dailyest-# GROUP BY e.date\n>>> dailyest-# UNION ALL\n>>> dailyest-# SELECT e.date, max(e.estimate)\n>>> dailyest-# FROM\n>>> dailyest-# par_list l,\n>>> dailyest-# par_est_2012_07_09 e\n>>> dailyest-# WHERE\n>>> dailyest-# l.id = e.list_id and\n>>> dailyest-# e.date = '2012-07-09' and\n>>> dailyest-# l.fid = 1 and\n>>> dailyest-# l.sid = 143441 and\n>>> dailyest-# l.cid in (36, 39, 6000) and\n>>> dailyest-# e.aid = 333710667\n>>> dailyest-# GROUP BY e.date\n>>> dailyest-# UNION ALL\n>>> dailyest-# SELECT e.date, max(e.estimate)\n>>> dailyest-# FROM\n>>> dailyest-# par_list l,\n>>> dailyest-# par_est_2012_07_10 e\n>>> dailyest-# WHERE\n>>> dailyest-# l.id = e.list_id and\n>>> dailyest-# e.date = '2012-07-10' and\n>>> dailyest-# l.fid = 1 and\n>>> dailyest-# l.sid = 143441 and\n>>> dailyest-# l.cid in (36, 39, 6000) and\n>>> dailyest-# e.aid = 333710667\n>>> dailyest-# GROUP BY e.date\n>>> dailyest-# ;\n>>>\n>>>\n>>> QUERY PLAN\n>>>\n>>>\n>>> ----------------------------------------------------------------------------------------------------------------------------------------------\n>>> ------------------------------------------------------\n>>> Result (cost=0.00..91.49 rows=3 width=8) (actual time=83.736..254.912\n>>> rows=3 loops=1)\n>>> Buffers: shared hit=27 read=28\n>>> -> Append (cost=0.00..91.49 rows=3 width=8) (actual\n>>> time=83.735..254.910 rows=3 loops=1)\n>>> Buffers: shared hit=27 read=28\n>>> -> GroupAggregate (cost=0.00..30.48 rows=1 width=8) (actual\n>>> time=83.735..83.735 rows=1 loops=1)\n>>> Buffers: shared hit=9 read=12\n>>> -> 
Nested Loop (cost=0.00..30.47 rows=1 width=8)\n>>> (actual time=63.920..83.728 rows=2 loops=1)\n>>> Buffers: shared hit=9 read=12\n>>> -> Index Scan using par_list_sid_fid_cid_key on\n>>> par_list l (cost=0.00..18.56 rows=2 width=4) (actual time=1.540..1.550\n>>> rows=2 loops=1)\n>>> Index Cond: ((sid = 143441) AND (fid = 1) AND\n>>> (cid = ANY ('{36,39,6000}'::integer[])))\n>>> Buffers: shared hit=7 read=4\n>>> -> Index Only Scan using par_est_2012_07_08_pkey\n>>> on par_est_2012_07_08 e (cost=0.00..5.94 rows=1 width=12) (actual time=\n>>> 41.083..41.083 rows=1 loops=2)\n>>> Index Cond: ((date = '2012-07-08'::date) AND\n>>> (list_id = l.id) AND (aid = 333710667))\n>>> Heap Fetches: 0\n>>> Buffers: shared hit=2 read=8\n>>> -> GroupAggregate (cost=0.00..30.48 rows=1 width=8) (actual\n>>> time=76.911..76.911 rows=1 loops=1)\n>>> Buffers: shared hit=9 read=8\n>>> -> Nested Loop (cost=0.00..30.47 rows=1 width=8)\n>>> (actual time=57.580..76.909 rows=2 loops=1)\n>>> Buffers: shared hit=9 read=8\n>>> -> Index Scan using par_list_sid_fid_cid_key on\n>>> par_list l (cost=0.00..18.56 rows=2 width=4) (actual time=0.007..0.016\n>>> rows=2 loops=1)\n>>> Index Cond: ((sid = 143441) AND (fid = 1) AND\n>>> (cid = ANY ('{36,39,6000}'::integer[])))\n>>> Buffers: shared hit=7\n>>> -> Index Only Scan using par_est_2012_07_09_pkey\n>>> on par_est_2012_07_09 e (cost=0.00..5.94 rows=1 width=12) (actual\n>>> time=38.440..38.442 rows=1 loops=2)\n>>> Index Cond: ((date = '2012-07-09'::date) AND\n>>> (list_id = l.id) AND (aid = 333710667))\n>>> Heap Fetches: 0\n>>> Buffers: shared hit=2 read=8\n>>> -> GroupAggregate (cost=0.00..30.49 rows=1 width=8) (actual\n>>> time=94.262..94.262 rows=1 loops=1)\n>>> Buffers: shared hit=9 read=8\n>>> -> Nested Loop (cost=0.00..30.47 rows=1 width=8)\n>>> (actual time=74.393..94.259 rows=2 loops=1)\n>>> Buffers: shared hit=9 read=8\n>>> -> Index Scan using par_list_sid_fid_cid_key on\n>>> par_list l (cost=0.00..18.56 rows=2 width=4) (actual time=0.007..0.017\n>>> rows=2 loops=1)\n>>> Index Cond: ((sid = 143441) AND (fid = 1) AND\n>>> (cid = ANY ('{36,39,6000}'::integer[])))\n>>> Buffers: shared hit=7\n>>> -> Index Only Scan using par_est_2012_07_10_pkey\n>>> on par_est_2012_07_10 e (cost=0.00..5.95 rows=1 width=12) (actual\n>>> time=47.116..47.117 rows=1 loops=2)\n>>> Index Cond: ((date = '2012-07-10'::date) AND\n>>> (list_id = l.id) AND (aid = 333710667))\n>>> Heap Fetches: 0\n>>> Buffers: shared hit=2 read=8\n>>> Total runtime: 255.074 ms\n>>> (38 rows)\n>>>\n>>>\n>>\n> At first. you may try the following out and find out if the partition\nconstraint exclusion is working or not::\n\nexplain\nselect *\nFROM\npar_est e\nWHERE\ne.date BETWEEN '2012-07-08' and '2012-07-10'",
"msg_date": "Fri, 15 Mar 2013 11:12:26 -0400",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join the master table with other table is very slow\n (partitioning)"
},
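One point worth keeping in mind about that test: constraint exclusion is applied at plan time, so it can only prune children when the partition key is compared against constants, as in the suggested EXPLAIN below. It has nothing to prune with when the date is only known from a joined row at run time. That also explains the shape of the earlier slow plan: the three surviving children are appended first and the Append result is joined to par_list as a whole, rather than par_list being joined to each child separately.

    -- Prunes to the three daily children, because the bounds are literals:
    EXPLAIN
    SELECT *
    FROM par_est e
    WHERE e.date BETWEEN DATE '2012-07-08' AND DATE '2012-07-10';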
{
"msg_contents": "Hi Rumman,\n\nI think it works. Please see the following result. Thanks.\n\ndailyest=# explain select * from par_est e where e.date BETWEEN\n'2012-07-08' and '2012-07-10'\n;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------\n Result (cost=0.00..662886.68 rows=32485781 width=16)\n -> Append (cost=0.00..662886.68 rows=32485781 width=16)\n -> Seq Scan on par_est e (cost=0.00..0.00 rows=1 width=16)\n Filter: ((date >= '2012-07-08'::date) AND (date <=\n'2012-07-10'::date))\n -> Seq Scan on par_est_2012_07 e (cost=0.00..0.00 rows=1\nwidth=16)\n Filter: ((date >= '2012-07-08'::date) AND (date <=\n'2012-07-10'::date))\n -> Seq Scan on par_est_2012_07_08 e (cost=0.00..220695.53\nrows=10815502 width=16)\n Filter: ((date >= '2012-07-08'::date) AND (date <=\n'2012-07-10'::date))\n -> Seq Scan on par_est_2012_07_09 e (cost=0.00..220942.20\nrows=10827613 width=16)\n Filter: ((date >= '2012-07-08'::date) AND (date <=\n'2012-07-10'::date))\n -> Seq Scan on par_est_2012_07_10 e (cost=0.00..221248.96\nrows=10842664 width=16)\n Filter: ((date >= '2012-07-08'::date) AND (date <=\n'2012-07-10'::date))\n(12 rows)\n\n\n\n\nOn Fri, Mar 15, 2013 at 11:12 PM, AI Rumman <[email protected]> wrote:\n\n>\n>\n> On Fri, Mar 15, 2013 at 11:09 AM, Ao Jianwang <[email protected]> wrote:\n>\n>> Hi Rumman,\n>>\n>> Thanks for your response. I follow the guide to build the partition. The\n>> settings should be good. See the following result. Any insight? thanks.\n>>\n>> dailyest=# select version();\n>> version\n>>\n>>\n>> ------------------------------------------------------------------------------------------------------------\n>> PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc\n>> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n>> (1 row)\n>>\n>> dailyest=# show constraint_exclusion;\n>> constraint_exclusion\n>> ----------------------\n>> on\n>> (1 row)\n>>\n>> On Fri, Mar 15, 2013 at 11:04 PM, AI Rumman <[email protected]> wrote:\n>>\n>>> Which version of Postgresql are you using?\n>>> Have you set constraint_exclusion to parition?\n>>>\n>>>\n>>> On Fri, Mar 15, 2013 at 11:02 AM, Ao Jianwang <[email protected]>wrote:\n>>>\n>>>> Hi Experts,\n>>>>\n>>>> I found if we join the master table with other small table, then the\n>>>> running time is slow. While, if we join each child table with the small\n>>>> table, then it's very fast. Any comments and suggestions are greatly\n>>>> appreciated.\n>>>>\n>>>> *For example, par_list table is small(about 50k rows), while par_est\n>>>> is very large, for each day it's about 400MB. Therefore, we partition it by\n>>>> day. However, the query plan for joining the master table with par_list is\n>>>> bad, so the running time is slow. The good plan should be join each\n>>>> partition table with par_list separately, then aggregate the result\n>>>> together. *\n>>>> *\n>>>> *\n>>>> *1. Join the master table with a small table. 
It's slow.*\n>>>> dailyest=# explain (analyze on, buffers on)\n>>>> dailyest-# SELECT e.date, max(e.estimate)\n>>>> dailyest-# FROM\n>>>> dailyest-# par_list l,\n>>>> dailyest-# par_est e\n>>>> dailyest-# WHERE\n>>>> dailyest-# l.id = e.list_id and\n>>>> dailyest-# e.date BETWEEN '2012-07-08' and '2012-07-10'\n>>>> and\n>>>> dailyest-# l.fid = 1 and\n>>>> dailyest-# l.sid = 143441 and\n>>>> dailyest-# l.cid in (36, 39, 6000) and\n>>>> dailyest-# e.aid = 333710667\n>>>> dailyest-# GROUP BY e.date\n>>>> dailyest-# ORDER BY e.date;\n>>>>\n>>>> -----------------------\n>>>> GroupAggregate (cost=745326.86..745326.88 rows=1 width=8) (actual\n>>>> time=6281.364..6281.366 rows=3 loops=1)\n>>>> Buffers: shared hit=3 read=175869\n>>>> -> Sort (cost=745326.86..745326.86 rows=1 width=8) (actual\n>>>> time=6281.358..6281.358 rows=6 loops=1)\n>>>> Sort Key: e.date\n>>>> Sort Method: quicksort Memory: 25kB\n>>>> Buffers: shared hit=3 read=175869\n>>>> -> Nested Loop (cost=0.00..745326.85 rows=1 width=8) (actual\n>>>> time=1228.493..6281.349 rows=6 loops=1)\n>>>> Join Filter: (l.id = e.list_id)\n>>>> Rows Removed by Join Filter: 4040\n>>>> Buffers: shared hit=3 read=175869\n>>>> -> Seq Scan on par_list l (cost=0.00..1213.10 rows=2\n>>>> width=4) (actual time=0.010..38.272 rows=2 loops=1)\n>>>> Filter: ((fid = 1) AND (sid = 143441) AND (cid =\n>>>> ANY ('{36,39,6000}'::integer[])))\n>>>> Rows Removed by Filter: 50190\n>>>> Buffers: shared hit=3 read=269\n>>>> -> Materialize (cost=0.00..744102.56 rows=407\n>>>> width=12) (actual time=9.707..3121.053 rows=2023 loops=2)\n>>>> Buffers: shared read=175600\n>>>> -> Append (cost=0.00..744100.52 rows=407\n>>>> width=12) (actual time=19.410..6240.044 rows=2023 loops=1)\n>>>> Buffers: shared read=175600\n>>>> -> Seq Scan on par_est e (cost=0.00..0.00\n>>>> rows=1 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n>>>> Filter: ((date >= '2012-07-08'::date)\n>>>> AND (date <= '2012-07-10'::date) AND (aid = 333710667))\n>>>> -> Seq Scan on par_est_2012_07 e\n>>>> (cost=0.00..0.00 rows=1 width=12) (actual time=0.000..0.000 rows=0 loops=1)\n>>>> Filter: ((date >= '2012-07-08'::date)\n>>>> AND (date <= '2012-07-10'::date) AND (aid = 333710667))\n>>>> -> Seq Scan on par_est_2012_07_08 e\n>>>> (cost=0.00..247736.09 rows=135 width=12) (actual time=19.408..2088.627\n>>>> rows=674 loops=1)\n>>>> Filter: ((date >= '2012-07-08'::date)\n>>>> AND (date <= '2012-07-10'::date) AND (aid = 333710667))\n>>>> Rows Removed by Filter: 10814878\n>>>> Buffers: shared read=58463\n>>>> -> Seq Scan on par_est_2012_07_09 e\n>>>> (cost=0.00..248008.81 rows=137 width=12) (actual time=6.390..1963.238\n>>>> rows=676 loops=1)\n>>>> Filter: ((date >= '2012-07-08'::date)\n>>>> AND (date <= '2012-07-10'::date) AND (aid = 333710667))\n>>>> Rows Removed by Filter: 10826866\n>>>> Buffers: shared read=58528\n>>>> -> Seq Scan on par_est_2012_07_10 e\n>>>> (cost=0.00..248355.62 rows=133 width=12) (actual time=15.135..2187.312\n>>>> rows=673 loops=1)\n>>>> Filter: ((date >= '2012-07-08'::date)\n>>>> AND (date <= '2012-07-10'::date) AND (aid = 333710667))\n>>>> Rows Removed by Filter: 10841989\n>>>> Buffers: shared read=58609\n>>>> Total runtime: 6281.444 ms\n>>>> (35 rows)\n>>>>\n>>>>\n>>>> *2. Join each partition table with small table (par_list) and union\n>>>> the result. This runs very fast. However, it's not reasonable if we union\n>>>> 180 SELECT statements (for example, the date is from 2012-07-01 to\n>>>> 2012-12-31. 
Any better suggestions.*\n>>>> *\n>>>> *\n>>>> dailyest=# explain (analyze on, buffers on)\n>>>> dailyest-# SELECT e.date, max(e.estimate)\n>>>> dailyest-# FROM\n>>>> dailyest-# par_list l,\n>>>> dailyest-# par_est_2012_07_08 e\n>>>> dailyest-# WHERE\n>>>> dailyest-# l.id = e.list_id and\n>>>> dailyest-# e.date = '2012-07-08' and\n>>>> dailyest-# l.fid = 1 and\n>>>> dailyest-# l.sid = 143441 and\n>>>> dailyest-# l.cid in (36, 39, 6000) and\n>>>> dailyest-# e.aid = 333710667\n>>>> dailyest-# GROUP BY e.date\n>>>> dailyest-# UNION ALL\n>>>> dailyest-# SELECT e.date, max(e.estimate)\n>>>> dailyest-# FROM\n>>>> dailyest-# par_list l,\n>>>> dailyest-# par_est_2012_07_09 e\n>>>> dailyest-# WHERE\n>>>> dailyest-# l.id = e.list_id and\n>>>> dailyest-# e.date = '2012-07-09' and\n>>>> dailyest-# l.fid = 1 and\n>>>> dailyest-# l.sid = 143441 and\n>>>> dailyest-# l.cid in (36, 39, 6000) and\n>>>> dailyest-# e.aid = 333710667\n>>>> dailyest-# GROUP BY e.date\n>>>> dailyest-# UNION ALL\n>>>> dailyest-# SELECT e.date, max(e.estimate)\n>>>> dailyest-# FROM\n>>>> dailyest-# par_list l,\n>>>> dailyest-# par_est_2012_07_10 e\n>>>> dailyest-# WHERE\n>>>> dailyest-# l.id = e.list_id and\n>>>> dailyest-# e.date = '2012-07-10' and\n>>>> dailyest-# l.fid = 1 and\n>>>> dailyest-# l.sid = 143441 and\n>>>> dailyest-# l.cid in (36, 39, 6000) and\n>>>> dailyest-# e.aid = 333710667\n>>>> dailyest-# GROUP BY e.date\n>>>> dailyest-# ;\n>>>>\n>>>>\n>>>> QUERY PLAN\n>>>>\n>>>>\n>>>> ----------------------------------------------------------------------------------------------------------------------------------------------\n>>>> ------------------------------------------------------\n>>>> Result (cost=0.00..91.49 rows=3 width=8) (actual time=83.736..254.912\n>>>> rows=3 loops=1)\n>>>> Buffers: shared hit=27 read=28\n>>>> -> Append (cost=0.00..91.49 rows=3 width=8) (actual\n>>>> time=83.735..254.910 rows=3 loops=1)\n>>>> Buffers: shared hit=27 read=28\n>>>> -> GroupAggregate (cost=0.00..30.48 rows=1 width=8) (actual\n>>>> time=83.735..83.735 rows=1 loops=1)\n>>>> Buffers: shared hit=9 read=12\n>>>> -> Nested Loop (cost=0.00..30.47 rows=1 width=8)\n>>>> (actual time=63.920..83.728 rows=2 loops=1)\n>>>> Buffers: shared hit=9 read=12\n>>>> -> Index Scan using par_list_sid_fid_cid_key on\n>>>> par_list l (cost=0.00..18.56 rows=2 width=4) (actual time=1.540..1.550\n>>>> rows=2 loops=1)\n>>>> Index Cond: ((sid = 143441) AND (fid = 1)\n>>>> AND (cid = ANY ('{36,39,6000}'::integer[])))\n>>>> Buffers: shared hit=7 read=4\n>>>> -> Index Only Scan using par_est_2012_07_08_pkey\n>>>> on par_est_2012_07_08 e (cost=0.00..5.94 rows=1 width=12) (actual time=\n>>>> 41.083..41.083 rows=1 loops=2)\n>>>> Index Cond: ((date = '2012-07-08'::date) AND\n>>>> (list_id = l.id) AND (aid = 333710667))\n>>>> Heap Fetches: 0\n>>>> Buffers: shared hit=2 read=8\n>>>> -> GroupAggregate (cost=0.00..30.48 rows=1 width=8) (actual\n>>>> time=76.911..76.911 rows=1 loops=1)\n>>>> Buffers: shared hit=9 read=8\n>>>> -> Nested Loop (cost=0.00..30.47 rows=1 width=8)\n>>>> (actual time=57.580..76.909 rows=2 loops=1)\n>>>> Buffers: shared hit=9 read=8\n>>>> -> Index Scan using par_list_sid_fid_cid_key on\n>>>> par_list l (cost=0.00..18.56 rows=2 width=4) (actual time=0.007..0.016\n>>>> rows=2 loops=1)\n>>>> Index Cond: ((sid = 143441) AND (fid = 1)\n>>>> AND (cid = ANY ('{36,39,6000}'::integer[])))\n>>>> Buffers: shared hit=7\n>>>> -> Index Only Scan using par_est_2012_07_09_pkey\n>>>> on par_est_2012_07_09 e (cost=0.00..5.94 rows=1 width=12) (actual\n>>>> 
time=38.440..38.442 rows=1 loops=2)\n>>>> Index Cond: ((date = '2012-07-09'::date) AND\n>>>> (list_id = l.id) AND (aid = 333710667))\n>>>> Heap Fetches: 0\n>>>> Buffers: shared hit=2 read=8\n>>>> -> GroupAggregate (cost=0.00..30.49 rows=1 width=8) (actual\n>>>> time=94.262..94.262 rows=1 loops=1)\n>>>> Buffers: shared hit=9 read=8\n>>>> -> Nested Loop (cost=0.00..30.47 rows=1 width=8)\n>>>> (actual time=74.393..94.259 rows=2 loops=1)\n>>>> Buffers: shared hit=9 read=8\n>>>> -> Index Scan using par_list_sid_fid_cid_key on\n>>>> par_list l (cost=0.00..18.56 rows=2 width=4) (actual time=0.007..0.017\n>>>> rows=2 loops=1)\n>>>> Index Cond: ((sid = 143441) AND (fid = 1)\n>>>> AND (cid = ANY ('{36,39,6000}'::integer[])))\n>>>> Buffers: shared hit=7\n>>>> -> Index Only Scan using par_est_2012_07_10_pkey\n>>>> on par_est_2012_07_10 e (cost=0.00..5.95 rows=1 width=12) (actual\n>>>> time=47.116..47.117 rows=1 loops=2)\n>>>> Index Cond: ((date = '2012-07-10'::date) AND\n>>>> (list_id = l.id) AND (aid = 333710667))\n>>>> Heap Fetches: 0\n>>>> Buffers: shared hit=2 read=8\n>>>> Total runtime: 255.074 ms\n>>>> (38 rows)\n>>>>\n>>>>\n>>>\n>> At first. you may try the following out and find out if the partition\n> constraint exclusion is working or not::\n>\n> explain\n> select *\n> FROM\n> par_est e\n> WHERE\n> e.date BETWEEN '2012-07-08' and '2012-07-10'\n>",
"msg_date": "Fri, 15 Mar 2013 23:17:45 +0800",
"msg_from": "Ao Jianwang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Join the master table with other table is very slow\n (partitioning)"
},
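So the pruning itself is working; in the earlier slow plan the surviving children were still read in full because the selective aid predicate had no index it could use. The later messages refer to per-child indexes named like par_est_2012_07_08_aid_index; assuming plain btree indexes on aid (a sketch here; the poster's actual definitions only appear further down the thread), they would be created roughly as:

    CREATE INDEX par_est_2012_07_08_aid_index ON par_est_2012_07_08 (aid);
    CREATE INDEX par_est_2012_07_09_aid_index ON par_est_2012_07_09 (aid);
    CREATE INDEX par_est_2012_07_10_aid_index ON par_est_2012_07_10 (aid);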
{
"msg_contents": "Yes, the index name is par_est_2012_07_09_aid_index on the aid column. The\nplan is as follows. It seems looks better than the old one, since it choose\nthe index scan. However, I don't think it's efficient, since it still\nappend the result from child tables together, then join the small table\n(par_list). I expect each child table will join with the small table, then\naggregate them together as the \"UNION ALL\" did. Any comments. Thanks.\n\nexplain\nselect *\nFROM\npar_est e\nWHERE\ne.date BETWEEN '2012-07-12' and '2012-07-14'\nand e.aid = 310723177\nand exists\n (\n select true\n from par_daily_list l\n where l.id = e.list_id and\n l.fid = 1 and\nl.sid = 143441 and\n l.cid in (36, 39, 6000)\n )\n\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..168.09 rows=1 width=16)\n -> Index Scan using par_daily_list_sid_fid_cid_key on par_daily_list l\n (cost=0.00..18.56 rows=2 width=4)\n Index Cond: ((sid = 143441) AND (fid = 1) AND (cid = ANY\n('{36,39,6000}'::integer[])))\n -> Append (cost=0.00..74.71 rows=5 width=16)\n -> Seq Scan on par_est e (cost=0.00..0.00 rows=1 width=16)\n Filter: ((date >= '2012-07-08'::date) AND (date <=\n'2012-07-10'::date) AND (aid = 310723177) AND (l.id = list_id))\n -> Seq Scan on par_est_2012_07 e (cost=0.00..0.00 rows=1\nwidth=16)\n Filter: ((date >= '2012-07-08'::date) AND (date <=\n'2012-07-10'::date) AND (aid = 310723177) AND (l.id = list_id))\n -> Bitmap Heap Scan on par_est_2012_07_08 e (cost=20.86..24.88\nrows=1 width=16)\n Recheck Cond: ((aid = 310723177) AND (list_id = l.id))\n Filter: ((date >= '2012-07-08'::date) AND (date <=\n'2012-07-10'::date))\n -> BitmapAnd (cost=20.86..20.86 rows=1 width=0)\n -> Bitmap Index Scan on par_est_2012_07_08_aid_index\n (cost=0.00..6.47 rows=138 width=0)\n Index Cond: (aid = 310723177)\n -> Bitmap Index Scan on par_est_2012_07_08_le_index\n (cost=0.00..14.11 rows=623 width=0)\n Index Cond: (list_id = l.id)\n -> Bitmap Heap Scan on par_est_2012_07_09 e (cost=20.94..24.96\nrows=1 width=16)\n Recheck Cond: ((aid = 310723177) AND (list_id = l.id))\n Filter: ((date >= '2012-07-08'::date) AND (date <=\n'2012-07-10'::date))\n -> BitmapAnd (cost=20.94..20.94 rows=1 width=0)\n -> Bitmap Index Scan on par_est_2012_07_09_aid_index\n (cost=0.00..6.44 rows=134 width=0)\n Index Cond: (aid = 310723177)\n -> Bitmap Index Scan on par_est_2012_07_09_le_index\n (cost=0.00..14.22 rows=637 width=0)\n Index Cond: (list_id = l.id)\n -> Bitmap Heap Scan on par_est_2012_07_10 e (cost=20.85..24.87\nrows=1 width=16)\n Recheck Cond: ((aid = 310723177) AND (list_id = l.id))\n Filter: ((date >= '2012-07-08'::date) AND (date <=\n'2012-07-10'::date))\n -> BitmapAnd (cost=20.85..20.85 rows=1 width=0)\n -> Bitmap Index Scan on par_est_2012_07_10_aid_index\n (cost=0.00..6.45 rows=135 width=0)\n Index Cond: (aid = 310723177)\n -> Bitmap Index Scan on par_est_2012_07_10_le_index\n (cost=0.00..14.11 rows=623 width=0)\n Index Cond: (list_id = l.id)\n(32 rows)\n\nYes, the index name is par_est_2012_07_09_aid_index on the aid column. The plan is as follows. It seems looks better than the old one, since it choose the index scan. However, I don't think it's efficient, since it still append the result from child tables together, then join the small table (par_list). I expect each child table will join with the small table, then aggregate them together as the \"UNION ALL\" did. Any comments. 
Thanks.\nexplainselect * FROM \npar_est eWHERE e.date BETWEEN '2012-07-12' and '2012-07-14' \nand e.aid = 310723177and exists ( select true\n from par_daily_list l where l.id = e.list_id and l.fid = 1 and\nl.sid = 143441 and l.cid in (36, 39, 6000) )\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------- Nested Loop (cost=0.00..168.09 rows=1 width=16)\n -> Index Scan using par_daily_list_sid_fid_cid_key on par_daily_list l (cost=0.00..18.56 rows=2 width=4) Index Cond: ((sid = 143441) AND (fid = 1) AND (cid = ANY ('{36,39,6000}'::integer[])))\n -> Append (cost=0.00..74.71 rows=5 width=16) -> Seq Scan on par_est e (cost=0.00..0.00 rows=1 width=16)\n Filter: ((date >= '2012-07-08'::date) AND (date <= '2012-07-10'::date) AND (aid = 310723177) AND (l.id = list_id))\n -> Seq Scan on par_est_2012_07 e (cost=0.00..0.00 rows=1 width=16) Filter: ((date >= '2012-07-08'::date) AND (date <= '2012-07-10'::date) AND (aid = 310723177) AND (l.id = list_id))\n -> Bitmap Heap Scan on par_est_2012_07_08 e (cost=20.86..24.88 rows=1 width=16) Recheck Cond: ((aid = 310723177) AND (list_id = l.id))\n Filter: ((date >= '2012-07-08'::date) AND (date <= '2012-07-10'::date)) -> BitmapAnd (cost=20.86..20.86 rows=1 width=0)\n -> Bitmap Index Scan on par_est_2012_07_08_aid_index (cost=0.00..6.47 rows=138 width=0) Index Cond: (aid = 310723177)\n -> Bitmap Index Scan on par_est_2012_07_08_le_index (cost=0.00..14.11 rows=623 width=0) Index Cond: (list_id = l.id)\n -> Bitmap Heap Scan on par_est_2012_07_09 e (cost=20.94..24.96 rows=1 width=16) Recheck Cond: ((aid = 310723177) AND (list_id = l.id))\n Filter: ((date >= '2012-07-08'::date) AND (date <= '2012-07-10'::date)) -> BitmapAnd (cost=20.94..20.94 rows=1 width=0)\n -> Bitmap Index Scan on par_est_2012_07_09_aid_index (cost=0.00..6.44 rows=134 width=0) Index Cond: (aid = 310723177)\n -> Bitmap Index Scan on par_est_2012_07_09_le_index (cost=0.00..14.22 rows=637 width=0) Index Cond: (list_id = l.id)\n -> Bitmap Heap Scan on par_est_2012_07_10 e (cost=20.85..24.87 rows=1 width=16) Recheck Cond: ((aid = 310723177) AND (list_id = l.id))\n Filter: ((date >= '2012-07-08'::date) AND (date <= '2012-07-10'::date)) -> BitmapAnd (cost=20.85..20.85 rows=1 width=0)\n -> Bitmap Index Scan on par_est_2012_07_10_aid_index (cost=0.00..6.45 rows=135 width=0) Index Cond: (aid = 310723177)\n -> Bitmap Index Scan on par_est_2012_07_10_le_index (cost=0.00..14.11 rows=623 width=0) Index Cond: (list_id = l.id)\n(32 rows)",
"msg_date": "Fri, 15 Mar 2013 23:39:54 +0800",
"msg_from": "Ao Jianwang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Join the master table with other table is very slow\n (partitioning)"
},
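On the concern about hand-writing 180 UNION ALL branches: one possible workaround (a sketch, not something proposed in the thread) is to generate the statement text from the date range and have the application execute the result; generate_series, to_char and format are all available in 9.2:

    SELECT string_agg(
             format('SELECT e.date, max(e.estimate)'
                    ' FROM par_list l, %I e'
                    ' WHERE l.id = e.list_id AND e.date = %L'
                    ' AND l.fid = 1 AND l.sid = 143441'
                    ' AND l.cid IN (36, 39, 6000) AND e.aid = 333710667'
                    ' GROUP BY e.date',
                    'par_est_' || to_char(d, 'YYYY_MM_DD'),
                    d::date),
             ' UNION ALL ')
    FROM generate_series(DATE '2012-07-08', DATE '2012-07-10',
                         INTERVAL '1 day') AS g(d);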
{
"msg_contents": "Ao Jianwang <[email protected]> writes:\n> I found if we join the master table with other small table, then the\n> running time is slow. While, if we join each child table with the small\n> table, then it's very fast. Any comments and suggestions are greatly\n> appreciated.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nYou haven't shown us table schemas, particularly the index definitions.\nIt looks to me like the partition child tables probably don't have\nindexes that are well adapted to this query. Equality constraints\nshould be on leading columns of the index, but the only index I see\nevidence of in your plans has the date column first. Probably the\nplanner is considering an inner-indexscan plan and rejecting it as\nbeing more expensive than this one, because it would have to scan too\nmuch of the index.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Mar 2013 11:42:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join the master table with other table is very slow\n (partitioning)"
},
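Read against the child-table definition shown in the next message (columns list_id, aid, estimate, date), that advice could translate into per-child indexes along the following lines; the index name and exact column list here are illustrative rather than the poster's DDL, with the equality columns first and estimate kept as the last key column so the max() can still be answered by an index-only scan:

    CREATE INDEX par_est_2012_07_08_aid_lid_date_idx
        ON par_est_2012_07_08 (aid, list_id, date, estimate);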
{
"msg_contents": "Hi Tom, Rumman\n\nHere I use two levels of partition. That's, par_est is first partitioned by\nmonthly (such as par_est_2012_07, ...), then for each monthly child table,\nwe create the daily partition table (such as par_est_2012_07_01).\nAnd, actually,\nI did some test on that. The result is as follows.\n*1) If postgres can join each child table (such as par_est_2012_07_08) with\nthe small table (par_list), then use par_est_2012_07_08_pkey can let the\npostgres use index only scan (in UNION ALL), which is faster. However,\npostgres doesn't do like that.*\n\ndailyest=# \\d par_est_2012_07_08\nTable \"public.par_est_2012_07_08\"\n Column | Type | Modifiers\n----------+---------+-----------\n list_id | integer | not null\n aid | integer | not null\n estimate | integer | not null\n date | date | not null\nIndexes:\n \"par_est_2012_07_08_pkey\" PRIMARY KEY, btree (date, list_id, aid,\nestimate) CLUSTER\nCheck constraints:\n \"par_est_2012_07_08_date_check\" CHECK (date = '2012-07-12'::date)\n \"par_est_2012_07_date_check\" CHECK (date >= '2012-07-01'::date AND date\n<= '2012-07-31'::date)\nForeign-key constraints:\n \"par_est_2012_07_08_list_id_fk\" FOREIGN KEY (list_id) REFERENCES\npar_list(id)\nInherits: par_est_2012_07\n\ndailyest=# \\d par_list\n\nReferenced by:\n TABLE \"par_est_2012_07_01\" CONSTRAINT \"par_est_2012_07_01_list_id_fk\"\nFOREIGN KEY (list_id) REFERENCES par_list(id)\n TABLE \"par_est_2012_07_02\" CONSTRAINT \"par_est_2012_07_02_list_id_fk\"\nFOREIGN KEY (list_id) REFERENCES par_list(id)\n TABLE \"par_est_2012_07_03\" CONSTRAINT \"par_est_2012_07_03_list_id_fk\"\nFOREIGN KEY (list_id) REFERENCES par_list(id)\n TABLE \"par_est_2012_07_04\" CONSTRAINT \"par_est_2012_07_04_list_id_fk\"\nFOREIGN KEY (list_id) REFERENCES par_list(id)\n TABLE \"par_est_2012_07_05\" CONSTRAINT \"par_est_2012_07_05_list_id_fk\"\nFOREIGN KEY (list_id) REFERENCES par_list(id)\n TABLE \"par_est_2012_07_06\" CONSTRAINT \"par_est_2012_07_06_list_id_fk\"\nFOREIGN KEY (list_id) REFERENCES par_list(id)\n TABLE \"par_est_2012_07_07\" CONSTRAINT \"par_est_2012_07_07_list_id_fk\"\nFOREIGN KEY (list_id) REFERENCES par_list(id)\n TABLE \"par_est_2012_07_08\" CONSTRAINT \"par_est_2012_07_08_list_id_fk\"\nFOREIGN KEY (list_id) REFERENCES par_list(id)\n TABLE \"par_est_2012_07_09\" CONSTRAINT \"par_est_2012_07_09_list_id_fk\"\nFOREIGN KEY (list_id) REFERENCES par_list(id)\n TABLE \"par_est_2012_07_10\" CONSTRAINT \"par_est_2012_07_10_list_id_fk\"\nFOREIGN KEY (list_id) REFERENCES par_list(id)\n\n\n*2) As postgres just append the result from child tables and lastly join\nwith the small table. I change the index of the child table to the\nfollowing. So that the index can be used. However, it's still slower than\nthe \"UNION ALL\" solution. 
Any comments, thanks.*\ndailyest=# \\d par_est_2012_07_08\nTable \"public.par_est_2012_07_08\"\n Column | Type | Modifiers\n----------+---------+-----------\n list_id | integer | not null\n aid | integer | not null\n estimate | integer | not null\n date | date | not null\nIndexes:\n \"par_est_2012_07_08_aid_index\" btree (aid)\n \"par_est_2012_07_08_le_index\" btree (list_id, estimate) CLUSTER\nCheck constraints:\n \"par_est_2012_07_08_date_check\" CHECK (date = '2012-07-08'::date)\n \"par_est_2012_07_date_check\" CHECK (date >= '2012-07-01'::date AND date\n<= '2012-07-31'::date)\nForeign-key constraints:\n \"par_est_2012_07_08_list_id_fk\" FOREIGN KEY (list_id) REFERENCES\npar_list(id)\nInherits: par_est_2012_07* *\n\n
On Fri, Mar 15, 2013 at 11:42 PM, Tom Lane <[email protected]> wrote:\n\n> Ao Jianwang <[email protected]> writes:\n> > I found if we join the master table with other small table, then the\n> > running time is slow. While, if we join each child table with the small\n> > table, then it's very fast. Any comments and suggestions are greatly\n> > appreciated.\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n
> You haven't shown us table schemas, particularly the index definitions.\n> It looks to me like the partition child tables probably don't have\n> indexes that are well adapted to this query. Equality constraints\n> should be on leading columns of the index, but the only index I see\n> evidence of in your plans has the date column first. Probably the\n> planner is considering an inner-indexscan plan and rejecting it as\n> being more expensive than this one, because it would have to scan too\n> much of the index.\n>\n> regards, tom lane\n>",
"msg_date": "Sat, 16 Mar 2013 00:04:08 +0800",
"msg_from": "Ao Jianwang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Join the master table with other table is very slow\n (partitioning)"
}
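Tom's point about leading index columns suggests one concrete experiment. A hedged sketch, since the query itself is not shown here: keep the same columns as the old primary key, but reorder them so the column joined by equality (list_id) leads; within a daily child the check constraint already pins date to a single value, so it adds nothing as a leading column.

-- illustrative name and column order, assuming the join is on list_id
CREATE INDEX par_est_2012_07_08_lid_idx
    ON par_est_2012_07_08 (list_id, aid, estimate, date);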
] |
[
{
"msg_contents": "Does it make sense to pre-sort COPY FROM input to produce long runs of\nincreasing values of an indexed column, or does PostgreSQL perform\nthis optimization on its own?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Mar 2013 18:31:27 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pre-sorting COPY FROM input"
},
{
"msg_contents": "On 15.03.2013 19:31, Florian Weimer wrote:\n> Does it make sense to pre-sort COPY FROM input to produce long runs of\n> increasing values of an indexed column, or does PostgreSQL perform\n> this optimization on its own?\n\nPostgreSQL doesn't do that sort of an optimization itself, so yeah, if \nthe random I/O of index updates is a problem, pre-sorting the input \nshould help.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 15 Mar 2013 20:26:58 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pre-sorting COPY FROM input"
}
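A minimal sketch of one way to do the pre-sorting Heikki confirms is worthwhile, assuming the indexed column is called ts and the data can be staged first (table and file names are illustrative, not from the thread):

-- load into an unindexed staging table, then insert in index order so the
-- btree on target(ts) sees long runs of increasing values
CREATE TEMP TABLE staging (LIKE target);
COPY staging FROM '/path/to/input.csv' WITH (FORMAT csv);
INSERT INTO target SELECT * FROM staging ORDER BY ts;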
] |
[
{
"msg_contents": "After installing my new server I just discovered something that doesn't seem right:\n\nsudo du -h /var/lib/postgresql/9.2/main\n\n4.0K\t/var/lib/postgresql/9.2/main/pg_snapshots\n4.0K\t/var/lib/postgresql/9.2/main/pg_serial\n4.0K\t/var/lib/postgresql/9.2/main/pg_tblspc\n29M\t/var/lib/postgresql/9.2/main/pg_clog\n6.8G\t/var/lib/postgresql/9.2/main/pg_log\n104K\t/var/lib/postgresql/9.2/main/pg_stat_tmp\n81G\t/var/lib/postgresql/9.2/main/base/27132\n6.1M\t/var/lib/postgresql/9.2/main/base/12040\n4.0K\t/var/lib/postgresql/9.2/main/base/pgsql_tmp\n6.0M\t/var/lib/postgresql/9.2/main/base/12035\n6.0M\t/var/lib/postgresql/9.2/main/base/1\n81G\t/var/lib/postgresql/9.2/main/base\n80K\t/var/lib/postgresql/9.2/main/pg_multixact/members\n108K\t/var/lib/postgresql/9.2/main/pg_multixact/offsets\n192K\t/var/lib/postgresql/9.2/main/pg_multixact\n12K\t/var/lib/postgresql/9.2/main/pg_notify\n4.0K\t/var/lib/postgresql/9.2/main/pg_twophase\n160K\t/var/lib/postgresql/9.2/main/pg_subtrans\n752K\t/var/lib/postgresql/9.2/main/pg_xlog/archive_status\n202G\t/var/lib/postgresql/9.2/main/pg_xlog\n496K\t/var/lib/postgresql/9.2/main/global\n289G\t/var/lib/postgresql/9.2/main\n\nAs you can see the pg_xlog folder is 202G, which is more than my entire database - this seems wrong to me, however I have no clue why this would happen.\n\nIn short, this is my postgresql.conf\n\ndata_directory = '/var/lib/postgresql/9.2/main' # use data in another directory\nhba_file = '/etc/postgresql/9.2/main/pg_hba.conf' # host-based authentication file\nident_file = '/etc/postgresql/9.2/main/pg_ident.conf' # ident configuration file\nexternal_pid_file = '/var/run/postgresql/9.2-main.pid' # write an extra PID file\nlisten_addresses = '192.168.0.4, localhost' # what IP address(es) to listen on;\nport = 5432 # (change requires restart)\nmax_connections = 300 # (change requires restart)\nunix_socket_directory = '/var/run/postgresql' # (change requires restart)\nwal_level = hot_standby # minimal, archive, or hot_standby\nsynchronous_commit = on # synchronization level; on, off, or local\ncheckpoint_segments = 100 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 10min # range 30s-1h\ncheckpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0\narchive_mode = on # allows archiving to be done\narchive_command = 'rsync -a %p [email protected]:/var/lib/postgresql/9.2/wals/%f </dev/null' # command to use to archive a logfile segment\nmax_wal_senders = 1 # max number of walsender processes\nwal_keep_segments = 32 # in logfile segments, 16MB each; 0 disables\nhot_standby = on # \"on\" allows queries during recovery\nlog_line_prefix = '%t ' # special values:\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8' # locale for system error message\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\ndefault_statistics_target = 100\nmaintenance_work_mem = 1GB\ncheckpoint_completion_target = 0.9\neffective_cache_size = 22GB\nwork_mem = 160MB\nwal_buffers = 4MB\nshared_buffers = 4GB\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Mar 2013 10:14:52 +0100",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why is my pg_xlog directory so huge?"
},
{
"msg_contents": "On Mon, Mar 18, 2013 at 10:14 AM, Niels Kristian Schjødt\n<[email protected]> wrote:\n> After installing my new server I just discovered something that doesn't seem right:\n>\n> sudo du -h /var/lib/postgresql/9.2/main\n\n<snip>\n\n> As you can see the pg_xlog folder is 202G, which is more than my entire database - this seems wrong to me, however I have no clue why this would happen.\n\nMy first guess would be that your archive_command is failing - so\ncheck your logs for that. If that command fails, no xlog files will\never be rotated (since it would invalidate your backups).\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Mar 2013 10:26:17 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is my pg_xlog directory so huge?"
},
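For comparison, a conservative archive_command along the lines of the example in the PostgreSQL documentation (the archive path is a placeholder): it refuses to overwrite an existing file and returns a non-zero exit status on failure, so a broken archive shows up in the server log instead of silently piling up WAL.

archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'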
{
"msg_contents": "Okay, thanks. It' seems you were right! Now I have fixed the issue (it was an ssh key). \nSo I started a: \nSELECT pg_start_backup('backup', true);\n\nAnd when done, I executed a: \nsudo -u postgres rsync -av --exclude postmaster.pid --exclude pg_xlog /var/lib/postgresql/9.2/main/ [email protected]:/var/lib/postgresql/9.2/main/\n\nThen I tried to finish off the backup by doing a:\nSELECT pg_stop_backup();\n\nBut It keeps on telling me: \nWARNING: pg_stop_backup still waiting for all required WAL segments to be archived (480 seconds elapsed)\nHINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.\n\nAnd I could see in the log that it's some kind of permission issue. So I canceled it, and started the streaming replication on my slave, and it seems to work fine. However the pg_xlog dir on the master is still HUGE 153G - so how can I get this mess sorted, and cleaned up that directory?\n\n\nDen 18/03/2013 kl. 10.26 skrev Magnus Hagander <[email protected]>:\n\n> On Mon, Mar 18, 2013 at 10:14 AM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n>> After installing my new server I just discovered something that doesn't seem right:\n>> \n>> sudo du -h /var/lib/postgresql/9.2/main\n> \n> <snip>\n> \n>> As you can see the pg_xlog folder is 202G, which is more than my entire database - this seems wrong to me, however I have no clue why this would happen.\n> \n> My first guess would be that your archive_command is failing - so\n> check your logs for that. If that command fails, no xlog files will\n> ever be rotated (since it would invalidate your backups).\n> \n> -- \n> Magnus Hagander\n> Me: http://www.hagander.net/\n> Work: http://www.redpill-linpro.com/\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Mar 2013 15:08:42 +0100",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why is my pg_xlog directory so huge?"
},
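One quick way to see whether archiving is keeping up, assuming the data directory from the posted configuration: every segment still waiting for a successful archive_command leaves a .ready marker behind, so the backlog can be counted directly.

# run as a user that can read the data directory
ls /var/lib/postgresql/9.2/main/pg_xlog/archive_status/ | grep -c '\.ready$'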
{
"msg_contents": "On Mon, Mar 18, 2013 at 2:08 PM, Niels Kristian Schjødt\n<[email protected]> wrote:\n> Okay, thanks. It' seems you were right! Now I have fixed the issue (it was an ssh key).\n> So I started a:\n> SELECT pg_start_backup('backup', true);\n>\n> And when done, I executed a:\n> sudo -u postgres rsync -av --exclude postmaster.pid --exclude pg_xlog /var/lib/postgresql/9.2/main/ [email protected]:/var/lib/postgresql/9.2/main/\n>\n> Then I tried to finish off the backup by doing a:\n> SELECT pg_stop_backup();\n>\n> But It keeps on telling me:\n> WARNING: pg_stop_backup still waiting for all required WAL segments to be archived (480 seconds elapsed)\n> HINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.\n>\n> And I could see in the log that it's some kind of permission issue. So I canceled it, and started the streaming replication on my slave, and it seems to work fine. However the pg_xlog dir on the master is still HUGE 153G - so how can I get this mess sorted, and cleaned up that directory?\n\nOnce you have your archive_command working, it will transfer all your\nxlog to the archive. Once it is, the xlog directory should\nautomatically clean up fairly quickly.\n\nIf you still have a permissions problem with the archive, you\nobviously need to fix that first.\n\nIf you don't care about your archive you could set your\narchive_command to e.g. /bin/true, and that will make it pretend it\nhas archived the files, and should clean it up quicker. But that will\nmean you have no valid archive and thus no valid backups, until you\nstart over froma new base.\n\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Mar 2013 14:38:56 +0000",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is my pg_xlog directory so huge?"
},
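A sketch of the throwaway-archive route Magnus describes, only sensible if the existing archive is expendable, since it discards the backup chain until a new base backup is taken:

# postgresql.conf: pretend every segment was archived successfully
archive_command = '/bin/true'

Reload the configuration afterwards (pg_ctl reload, or SELECT pg_reload_conf(); from psql) and the old segments should be recycled over the next few checkpoints.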
{
"msg_contents": "Thanks! it worked! :-)\n\n\nDen 18/03/2013 kl. 15.38 skrev Magnus Hagander <[email protected]>:\n\n> On Mon, Mar 18, 2013 at 2:08 PM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n>> Okay, thanks. It' seems you were right! Now I have fixed the issue (it was an ssh key).\n>> So I started a:\n>> SELECT pg_start_backup('backup', true);\n>> \n>> And when done, I executed a:\n>> sudo -u postgres rsync -av --exclude postmaster.pid --exclude pg_xlog /var/lib/postgresql/9.2/main/ [email protected]:/var/lib/postgresql/9.2/main/\n>> \n>> Then I tried to finish off the backup by doing a:\n>> SELECT pg_stop_backup();\n>> \n>> But It keeps on telling me:\n>> WARNING: pg_stop_backup still waiting for all required WAL segments to be archived (480 seconds elapsed)\n>> HINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.\n>> \n>> And I could see in the log that it's some kind of permission issue. So I canceled it, and started the streaming replication on my slave, and it seems to work fine. However the pg_xlog dir on the master is still HUGE 153G - so how can I get this mess sorted, and cleaned up that directory?\n> \n> Once you have your archive_command working, it will transfer all your\n> xlog to the archive. Once it is, the xlog directory should\n> automatically clean up fairly quickly.\n> \n> If you still have a permissions problem with the archive, you\n> obviously need to fix that first.\n> \n> If you don't care about your archive you could set your\n> archive_command to e.g. /bin/true, and that will make it pretend it\n> has archived the files, and should clean it up quicker. But that will\n> mean you have no valid archive and thus no valid backups, until you\n> start over froma new base.\n> \n> \n> -- \n> Magnus Hagander\n> Me: http://www.hagander.net/\n> Work: http://www.redpill-linpro.com/\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Mar 2013 16:41:19 +0100",
"msg_from": "=?iso-8859-1?Q?Niels_Kristian_Schj=F8dt?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why is my pg_xlog directory so huge?"
}
] |
[
{
"msg_contents": "Hi guys, I am worried about the effective_cache_size.\nI run a 32-bits postgres installation on a machine with 64 bits kernel.\nShould I limit effective_cache_size to a maximum of 2.5gb?\n\nHi guys, I am worried about the effective_cache_size.I run a 32-bits postgres installation on a machine with 64 bits kernel.Should I limit effective_cache_size to a maximum of 2.5gb?",
"msg_date": "Mon, 18 Mar 2013 14:53:39 -0300",
"msg_from": "Rodrigo Barboza <[email protected]>",
"msg_from_op": true,
"msg_subject": "effective_cache_size on 32-bits postgres"
},
{
"msg_contents": "On Mon, Mar 18, 2013 at 10:53 AM, Rodrigo Barboza\n<[email protected]> wrote:\n> Hi guys, I am worried about the effective_cache_size.\n> I run a 32-bits postgres installation on a machine with 64 bits kernel.\n> Should I limit effective_cache_size to a maximum of 2.5gb?\n\nThat variables refers to fs cache, so 32 bit pg should not matter.\nShared buffers and similar variables will be another matter.\n\nWhy the heck are you running 32 bit pg on a 64 bit system? You are\nalmost certainly doing it wrong.\n\n\n\n--\nRob Wultsch\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Mar 2013 11:14:22 -0700",
"msg_from": "Rob Wultsch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: effective_cache_size on 32-bits postgres"
},
{
"msg_contents": "Hello\n\n2013/3/18 Rodrigo Barboza <[email protected]>:\n> Hi guys, I am worried about the effective_cache_size.\n> I run a 32-bits postgres installation on a machine with 64 bits kernel.\n> Should I limit effective_cache_size to a maximum of 2.5gb?\n\nsure and probably little bit less\n\nRegards\n\nPavel\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Mar 2013 19:15:37 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: effective_cache_size on 32-bits postgres"
},
{
"msg_contents": "Ok, now I'm lost, who is right about the limit? Rob or Pavel?\n\nRob, I know it should be a 64 bit, and it will be soon, but there are good\nreasons for this scenario and it's ok for now.\n\n\nOn Mon, Mar 18, 2013 at 3:15 PM, Pavel Stehule <[email protected]>wrote:\n\n> Hello\n>\n> 2013/3/18 Rodrigo Barboza <[email protected]>:\n> > Hi guys, I am worried about the effective_cache_size.\n> > I run a 32-bits postgres installation on a machine with 64 bits kernel.\n> > Should I limit effective_cache_size to a maximum of 2.5gb?\n>\n> sure and probably little bit less\n>\n> Regards\n>\n> Pavel\n>\n\nOk, now I'm lost, who is right about the limit? Rob or Pavel?Rob, I know it should be a 64 bit, and it will be soon, but there are good reasons for this scenario and it's ok for now.\nOn Mon, Mar 18, 2013 at 3:15 PM, Pavel Stehule <[email protected]> wrote:\nHello\n\n2013/3/18 Rodrigo Barboza <[email protected]>:\n> Hi guys, I am worried about the effective_cache_size.\n> I run a 32-bits postgres installation on a machine with 64 bits kernel.\n> Should I limit effective_cache_size to a maximum of 2.5gb?\n\nsure and probably little bit less\n\nRegards\n\nPavel",
"msg_date": "Mon, 18 Mar 2013 15:18:08 -0300",
"msg_from": "Rodrigo Barboza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: effective_cache_size on 32-bits postgres"
},
{
"msg_contents": "2013/3/18 Pavel Stehule <[email protected]>:\n> Hello\n>\n> 2013/3/18 Rodrigo Barboza <[email protected]>:\n>> Hi guys, I am worried about the effective_cache_size.\n>> I run a 32-bits postgres installation on a machine with 64 bits kernel.\n>> Should I limit effective_cache_size to a maximum of 2.5gb?\n>\n> sure and probably little bit less\n\nwrong reply - Rob has true\n\nPavel\n\n>\n> Regards\n>\n> Pavel\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Mar 2013 19:18:37 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: effective_cache_size on 32-bits postgres"
},
{
"msg_contents": "So setting this as half of ram, as suggested in postgres tuning webpage\nshould be safe?\nIt says it is a conservative value...\n\n\nOn Mon, Mar 18, 2013 at 3:18 PM, Pavel Stehule <[email protected]>wrote:\n\n> 2013/3/18 Pavel Stehule <[email protected]>:\n> > Hello\n> >\n> > 2013/3/18 Rodrigo Barboza <[email protected]>:\n> >> Hi guys, I am worried about the effective_cache_size.\n> >> I run a 32-bits postgres installation on a machine with 64 bits kernel.\n> >> Should I limit effective_cache_size to a maximum of 2.5gb?\n> >\n> > sure and probably little bit less\n>\n> wrong reply - Rob has true\n>\n> Pavel\n>\n> >\n> > Regards\n> >\n> > Pavel\n>\n\nSo setting this as half of ram, as suggested in postgres tuning webpage should be safe?It says it is a conservative value...On Mon, Mar 18, 2013 at 3:18 PM, Pavel Stehule <[email protected]> wrote:\n2013/3/18 Pavel Stehule <[email protected]>:\n> Hello\n>\n> 2013/3/18 Rodrigo Barboza <[email protected]>:\n>> Hi guys, I am worried about the effective_cache_size.\n>> I run a 32-bits postgres installation on a machine with 64 bits kernel.\n>> Should I limit effective_cache_size to a maximum of 2.5gb?\n>\n> sure and probably little bit less\n\nwrong reply - Rob has true\n\nPavel\n\n>\n> Regards\n>\n> Pavel",
"msg_date": "Mon, 18 Mar 2013 15:23:56 -0300",
"msg_from": "Rodrigo Barboza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: effective_cache_size on 32-bits postgres"
},
{
"msg_contents": "2013/3/18 Rodrigo Barboza <[email protected]>:\n> So setting this as half of ram, as suggested in postgres tuning webpage\n> should be safe?\n> It says it is a conservative value...\n\ndepends how much memory is used as cache ??\n\nit can be a shared_buffers + file system cache\n\nRegards\n\nPavel Stehule\n\n>\n>\n> On Mon, Mar 18, 2013 at 3:18 PM, Pavel Stehule <[email protected]>\n> wrote:\n>>\n>> 2013/3/18 Pavel Stehule <[email protected]>:\n>> > Hello\n>> >\n>> > 2013/3/18 Rodrigo Barboza <[email protected]>:\n>> >> Hi guys, I am worried about the effective_cache_size.\n>> >> I run a 32-bits postgres installation on a machine with 64 bits kernel.\n>> >> Should I limit effective_cache_size to a maximum of 2.5gb?\n>> >\n>> > sure and probably little bit less\n>>\n>> wrong reply - Rob has true\n>>\n>> Pavel\n>>\n>> >\n>> > Regards\n>> >\n>> > Pavel\n>\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Mar 2013 19:33:49 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: effective_cache_size on 32-bits postgres"
},
{
"msg_contents": "Rodrigo Barboza <[email protected]> wrote:\n\n> So setting this as half of ram, as suggested in postgres tuning\n> webpage should be safe?\n\nHalf of RAM is likely to be a very bad setting for any work load. \nIt will tend to result in the highest possible number of pages\nduplicated in PostgreSQL and OS caches, reducing the cache hit\nratio. More commonly given advice is to start at 25% of RAM,\nlimited to 2GB on Windows or 32-bit systems or 8GB otherwise. Try\nincremental adjustments from that point using your actual workload\non you actual hardware to find the \"sweet spot\". Some DW\nenvironments report better performance assigning over 50% of RAM to\nshared_buffers; OLTP loads often need to reduce this to prevent\nperiodic episodes of high latency.\n\n-- \nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Mar 2013 11:47:05 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: effective_cache_size on 32-bits postgres"
},
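Putting Rob's and Kevin's points together, a hedged sketch of what that could look like; the 8GB figure is purely an assumption for illustration, and only shared_buffers is really constrained by the 32-bit server binary:

# illustration only, assuming 8GB of RAM
shared_buffers = 2GB            # ~25% of RAM, and a sane ceiling for a 32-bit binary
effective_cache_size = 5GB      # planner hint: shared_buffers plus expected OS file cache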
{
"msg_contents": "2013/3/18 Kevin Grittner <[email protected]>:\n> Rodrigo Barboza <[email protected]> wrote:\n>\n>> So setting this as half of ram, as suggested in postgres tuning\n>> webpage should be safe?\n>\n> Half of RAM is likely to be a very bad setting for any work load.\n> It will tend to result in the highest possible number of pages\n> duplicated in PostgreSQL and OS caches, reducing the cache hit\n> ratio. More commonly given advice is to start at 25% of RAM,\n> limited to 2GB on Windows or 32-bit systems or 8GB otherwise. Try\n> incremental adjustments from that point using your actual workload\n> on you actual hardware to find the \"sweet spot\". Some DW\n> environments report better performance assigning over 50% of RAM to\n> shared_buffers; OLTP loads often need to reduce this to prevent\n> periodic episodes of high latency.\n\nyou are speaking about shared_buffers now.\n\nPavel\n\n>\n> --\n> Kevin Grittner\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Mar 2013 19:50:56 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: effective_cache_size on 32-bits postgres"
},
{
"msg_contents": "On Mon, Mar 18, 2013 at 3:47 PM, Kevin Grittner <[email protected]> wrote:\n> Rodrigo Barboza <[email protected]> wrote:\n>\n>> So setting this as half of ram, as suggested in postgres tuning\n>> webpage should be safe?\n>\n> Half of RAM is likely to be a very bad setting for any work load.\n> It will tend to result in the highest possible number of pages\n> duplicated in PostgreSQL and OS caches, reducing the cache hit\n> ratio. More commonly given advice is to start at 25% of RAM,\n> limited to 2GB on Windows or 32-bit systems or 8GB otherwise. Try\n> incremental adjustments from that point using your actual workload\n> on you actual hardware to find the \"sweet spot\". Some DW\n> environments report better performance assigning over 50% of RAM to\n> shared_buffers; OLTP loads often need to reduce this to prevent\n> periodic episodes of high latency.\n\n\nHe's asking about effective_cache_size. You seem to be talking about\nshared_buffers.\n\nReal question behind this all, is whether the e_c_s GUC is 32-bit on\n32-bit systems. Because if so, it ought to be limited too. If not...\nnot.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 18 Mar 2013 15:51:17 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: effective_cache_size on 32-bits postgres"
},
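Claudio's underlying question, whether the GUC itself is capped on a 32-bit build, can be checked on the running server:

SELECT name, setting, unit, min_val, max_val
FROM pg_settings
WHERE name = 'effective_cache_size';

The setting is stored as a count of 8kB pages (see the unit column) and is only a planner hint, so it is not tied to what a 32-bit process can actually address.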
{
"msg_contents": "Yes, Claudio. You got it.\nBut Rob seems to have already answered the confusion between 32 and 64 bits\nfor effective_cache_size.\nActually I am creating generic configuration based on physical memory.\nSo I wanna be conservative about effective_cache_size. That's why I'm\nfollowing postgres tuning website instructions. If it says it is\nconservative, that's good for me.\n\n\nOn Mon, Mar 18, 2013 at 3:51 PM, Claudio Freire <[email protected]>wrote:\n\n> On Mon, Mar 18, 2013 at 3:47 PM, Kevin Grittner <[email protected]> wrote:\n> > Rodrigo Barboza <[email protected]> wrote:\n> >\n> >> So setting this as half of ram, as suggested in postgres tuning\n> >> webpage should be safe?\n> >\n> > Half of RAM is likely to be a very bad setting for any work load.\n> > It will tend to result in the highest possible number of pages\n> > duplicated in PostgreSQL and OS caches, reducing the cache hit\n> > ratio. More commonly given advice is to start at 25% of RAM,\n> > limited to 2GB on Windows or 32-bit systems or 8GB otherwise. Try\n> > incremental adjustments from that point using your actual workload\n> > on you actual hardware to find the \"sweet spot\". Some DW\n> > environments report better performance assigning over 50% of RAM to\n> > shared_buffers; OLTP loads often need to reduce this to prevent\n> > periodic episodes of high latency.\n>\n>\n> He's asking about effective_cache_size. You seem to be talking about\n> shared_buffers.\n>\n> Real question behind this all, is whether the e_c_s GUC is 32-bit on\n> 32-bit systems. Because if so, it ought to be limited too. If not...\n> not.\n>\n\nYes, Claudio. You got it.But Rob seems to have already answered the confusion between 32 and 64 bits for effective_cache_size.Actually I am creating generic configuration based on physical memory.\nSo I wanna be conservative about effective_cache_size. That's why I'm following postgres tuning website instructions. If it says it is conservative, that's good for me.\nOn Mon, Mar 18, 2013 at 3:51 PM, Claudio Freire <[email protected]> wrote:\nOn Mon, Mar 18, 2013 at 3:47 PM, Kevin Grittner <[email protected]> wrote:\n> Rodrigo Barboza <[email protected]> wrote:\n>\n>> So setting this as half of ram, as suggested in postgres tuning\n>> webpage should be safe?\n>\n> Half of RAM is likely to be a very bad setting for any work load.\n> It will tend to result in the highest possible number of pages\n> duplicated in PostgreSQL and OS caches, reducing the cache hit\n> ratio. More commonly given advice is to start at 25% of RAM,\n> limited to 2GB on Windows or 32-bit systems or 8GB otherwise. Try\n> incremental adjustments from that point using your actual workload\n> on you actual hardware to find the \"sweet spot\". Some DW\n> environments report better performance assigning over 50% of RAM to\n> shared_buffers; OLTP loads often need to reduce this to prevent\n> periodic episodes of high latency.\n\n\nHe's asking about effective_cache_size. You seem to be talking about\nshared_buffers.\n\nReal question behind this all, is whether the e_c_s GUC is 32-bit on\n32-bit systems. Because if so, it ought to be limited too. If not...\nnot.",
"msg_date": "Mon, 18 Mar 2013 15:54:12 -0300",
"msg_from": "Rodrigo Barboza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: effective_cache_size on 32-bits postgres"
}
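For a generic configuration derived from physical memory, a rough shell sketch (Linux-only; the percentages are just the conservative rules of thumb from this thread, not exact values, and a 32-bit build would additionally cap shared_buffers at about 2GB):

mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
# 25% of RAM for shared_buffers, 50% of RAM for effective_cache_size
echo "shared_buffers = $((mem_kb / 4 / 1024))MB"
echo "effective_cache_size = $((mem_kb / 2 / 1024))MB"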
] |
[
{
"msg_contents": "Folks,\n\nI just noticed that if I use a tstzrange for convenience, a standard\nbtree index on a timestamp won't get used for it. Example:\n\ntable a (\n\tid int,\n\tval text,\n\tts timestamptz\n);\nindex a_ts on a(ts);\n\nSELECT * FROM a WHERE ts <@ tstzrange('2013-01-01','2013-01-01 00:10:00')\n\n... will NOT use the index a_ts. Is this something which could be fixed\nfor 9.4?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Mar 2013 17:12:56 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index usage for tstzrange?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> I just noticed that if I use a tstzrange for convenience, a standard\n> btree index on a timestamp won't get used for it. Example:\n\n> table a (\n> \tid int,\n> \tval text,\n> \tts timestamptz\n> );\n> index a_ts on a(ts);\n\n> SELECT * FROM a WHERE ts <@ tstzrange('2013-01-01','2013-01-01 00:10:00')\n\n> ... will NOT use the index a_ts.\n\nWell, no. <@ is not a btree-indexable operator.\n\nWhat I find more disturbing is that this is what I get from the example\nin HEAD:\n\nregression=# explain SELECT * FROM a WHERE ts <@ tstzrange('2013-01-01','2013-01-01 00:10:00');\nERROR: XX000: type 1184 is not a range type\nLOCATION: range_get_typcache, rangetypes.c:1451\n\nHaven't traced through it to determine exactly what's happening, but\nisn't this a legitimate usage? And if it isn't, surely a more\nuser-facing error ought to be getting thrown somewhere upstream of here.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 20 Mar 2013 23:58:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index usage for tstzrange?"
},
{
"msg_contents": "On Thu, Mar 21, 2013 at 5:58 AM, Tom Lane <[email protected]> wrote:\n\n> Josh Berkus <[email protected]> writes:\n> > I just noticed that if I use a tstzrange for convenience, a standard\n> > btree index on a timestamp won't get used for it. Example:\n>\n> > table a (\n> > id int,\n> > val text,\n> > ts timestamptz\n> > );\n> > index a_ts on a(ts);\n>\n> > SELECT * FROM a WHERE ts <@ tstzrange('2013-01-01','2013-01-01 00:10:00')\n>\n> > ... will NOT use the index a_ts.\n>\n> Well, no. <@ is not a btree-indexable operator.\n>\n> What I find more disturbing is that this is what I get from the example\n> in HEAD:\n>\n> regression=# explain SELECT * FROM a WHERE ts <@\n> tstzrange('2013-01-01','2013-01-01 00:10:00');\n> ERROR: XX000: type 1184 is not a range type\n> LOCATION: range_get_typcache, rangetypes.c:1451\n>\n> Haven't traced through it to determine exactly what's happening, but\n> isn't this a legitimate usage? And if it isn't, surely a more\n> user-facing error ought to be getting thrown somewhere upstream of here.\n>\n> regards, tom lane\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIt is a legit usage, this is from a test i did myself (9.2.3)\n\ntest=# explain SELECT * FROM a WHERE ts <@\ntstzrange('2013-01-01','2013-04-01 00:10:00');\n QUERY PLAN\n------------------------------------------------------------------------------------\n Seq Scan on a (cost=0.00..23.75 rows=1 width=44)\n Filter: (ts <@ '[\"2013-01-01 00:00:00+02\",\"2013-04-01\n00:10:00+03\")'::tstzrange)\n\nOn Thu, Mar 21, 2013 at 5:58 AM, Tom Lane <[email protected]> wrote:\nJosh Berkus <[email protected]> writes:\n> I just noticed that if I use a tstzrange for convenience, a standard\n> btree index on a timestamp won't get used for it. Example:\n\n> table a (\n> id int,\n> val text,\n> ts timestamptz\n> );\n> index a_ts on a(ts);\n\n> SELECT * FROM a WHERE ts <@ tstzrange('2013-01-01','2013-01-01 00:10:00')\n\n> ... will NOT use the index a_ts.\n\nWell, no. <@ is not a btree-indexable operator.\n\nWhat I find more disturbing is that this is what I get from the example\nin HEAD:\n\nregression=# explain SELECT * FROM a WHERE ts <@ tstzrange('2013-01-01','2013-01-01 00:10:00');\nERROR: XX000: type 1184 is not a range type\nLOCATION: range_get_typcache, rangetypes.c:1451\n\nHaven't traced through it to determine exactly what's happening, but\nisn't this a legitimate usage? And if it isn't, surely a more\nuser-facing error ought to be getting thrown somewhere upstream of here.\n\n regards, tom lane\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nIt is a legit usage, this is from a test i did myself (9.2.3)test=# explain SELECT * FROM a WHERE ts <@ tstzrange('2013-01-01','2013-04-01 00:10:00'); QUERY PLAN\n------------------------------------------------------------------------------------ Seq Scan on a (cost=0.00..23.75 rows=1 width=44) Filter: (ts <@ '[\"2013-01-01 00:00:00+02\",\"2013-04-01 00:10:00+03\")'::tstzrange)",
"msg_date": "Thu, 21 Mar 2013 06:07:25 +0200",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index usage for tstzrange?"
},
{
"msg_contents": "On 21.03.2013 06:07, Vasilis Ventirozos wrote:\n> On Thu, Mar 21, 2013 at 5:58 AM, Tom Lane<[email protected]> wrote:\n>> What I find more disturbing is that this is what I get from the example\n>> in HEAD:\n>>\n>> regression=# explain SELECT * FROM a WHERE ts<@\n>> tstzrange('2013-01-01','2013-01-01 00:10:00');\n>> ERROR: XX000: type 1184 is not a range type\n>> LOCATION: range_get_typcache, rangetypes.c:1451\n>>\n>> Haven't traced through it to determine exactly what's happening, but\n>> isn't this a legitimate usage? And if it isn't, surely a more\n>> user-facing error ought to be getting thrown somewhere upstream of here.\n>\n> It is a legit usage, this is from a test i did myself (9.2.3)\n>\n> test=# explain SELECT * FROM a WHERE ts<@\n> tstzrange('2013-01-01','2013-04-01 00:10:00');\n> QUERY PLAN\n> ------------------------------------------------------------------------------------\n> Seq Scan on a (cost=0.00..23.75 rows=1 width=44)\n> Filter: (ts<@ '[\"2013-01-01 00:00:00+02\",\"2013-04-01\n> 00:10:00+03\")'::tstzrange)\n\nLooks like the range type cost estimation patch broke this, back in \nAugust already. The case of var <@ constant, where constant is a range \nand var is an element, that's broken. The cost estimation function, \nrangesel(), incorrectly assumes that the 'var' is always a range type.\n\nIt's a bit worrying that no-one noticed until now. I'll add a test for \nthat operator to the rangetypes regression test.\n\nThe immediate fix is attached, but this made me realize that rangesel() \nis still missing estimation for the \"element <@ range\" operator. It \nshouldn't be hard to implement, I'm pretty sure we have all the \nstatistics we need for that.\n\n- Heikki\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 21 Mar 2013 10:52:42 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index usage for tstzrange?"
},
{
"msg_contents": "\n> Well, no. <@ is not a btree-indexable operator.\n\nYes, but it's equivalent to ( ( a >= b1 or b1 is null ) and ( a < b2 or\nb2 is null ) ), which *is* btree-indexable and can use an index. So it\nseems like the kind of optimization we could eventually make.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 21 Mar 2013 17:05:31 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index usage for tstzrange?"
},
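Until the planner learns to do that expansion itself, the query from the start of the thread can be rewritten by hand into the btree-indexable form Josh describes (relying on the default '[)' bounds of tstzrange):

SELECT *
FROM a
WHERE ts >= '2013-01-01'
  AND ts <  '2013-01-01 00:10:00';

This form can use the plain btree index a_ts.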
{
"msg_contents": "On 21.03.2013 17:55, Alexander Korotkov wrote:\n> On Thu, Mar 21, 2013 at 12:52 PM, Heikki Linnakangas<\n>> The immediate fix is attached, but this made me realize that rangesel() is\n>> still missing estimation for the \"element<@ range\" operator. It shouldn't\n>> be hard to implement, I'm pretty sure we have all the statistics we need\n>> for that.\n>\n> Probably we could even call existing scalarltsel and scalargtsel for this\n> case.\n\nI came up with the attached. I didn't quite use scalarltsel, but I used \nthe scalarineqsel function, which contains the \"guts\" of scalarltsel and \nscalargtsel.\n\nOne thing I wasn't quite sure of (from the patch):\n\n> \t/*\n> \t * We use the data type's default < operator. This is bogus, if the range\n> \t * type's rngsubopc operator class is different. In practice, that ought\n> \t * to be rare. It would also be bogus to use the < operator from the\n> \t * rngsubopc operator class, because the statistics are collected using\n> \t * using the default operator class, anyway.\n> \t *\n> \t * For the same reason, use the default collation. The statistics are\n> \t * collected with the default collation.\n> \t */\n\nDoes that make sense? The other option would be to use the < operator \nfrom the rngsubopc op class, even though the scalar statistics are \ncollected with the default b-tree < operator. As long as the two sort \nroughly the same way, you get reasonable results either way. Yet another \noption would be to use histogram_selectivity() instead of \nineq_histogram_selectivity(), if the range's rngsubopc opclass isn't the \ntype's default opclass. histogram_selectivity() works with any operator \nregardless of the sort ordering, basically using the histogram values \nmerely as a sample, rather than as a histogram. But I'm reluctant to \nmake this any more complicated, as using a non-default opclass for the \nrange type is rare.\n\n- Heikki\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 22 Mar 2013 23:53:29 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index usage for tstzrange?"
},
{
"msg_contents": "On 22.03.2013 02:05, Josh Berkus wrote:\n>> Well, no.<@ is not a btree-indexable operator.\n>\n> Yes, but it's equivalent to ( ( a>= b1 or b1 is null ) and ( a< b2 or\n> b2 is null ) ), which *is* btree-indexable and can use an index. So it\n> seems like the kind of optimization we could eventually make.\n\nYeah. The sort order of <@ is the same as regular b-tree, so it should \nbe possible. In fact, nothing stops you from creating the suitable \noperator and b-tree support functions. See attached patch for int4, but \nthe same should work for timestamptz.\n\nWe should do this automatically. Or am I missing something?\n\n- Heikki\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 23 Mar 2013 00:53:14 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index usage for tstzrange?"
},
{
"msg_contents": "\n> We should do this automatically. Or am I missing something?\n\nAside from the need to support @> as well, not that I can see.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Mar 2013 18:23:43 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index usage for tstzrange?"
},
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> We should do this automatically. Or am I missing something?\n\nYes. This is not equality.\n\n> ALTER OPERATOR FAMILY integer_ops USING btree ADD\n> OPERATOR 3 <@ (int4, int4range),\n> FUNCTION 1 btint4rangecmp(int4, int4range);\n\nThat will break approximately everything in sight, starting with the\nplanner's opinion of what equality is. There is *way* too much stuff\nthat knows the semantics of btree opclasses for us to start jamming\nrandom operators into them, even if this seemed to work in trivial\ntesting. (See the last section of src/backend/access/nbtree/README\nto just scratch the surface of the assumptions this breaks.)\n\nIt's possible that for constant ranges we could have the planner expand\n\"intcol <@ 'x,y'::int4range\" into \"intcol between x and y\", using\nsomething similar to the index LIKE optimization (ie, the \"special\noperator\" stuff in indxpath.c). I'd like to find a way to make that\ntype of optimization pluggable, though --- the existing approach of\nhard-wiring knowledge into indxpath.c has never been anything but\na kluge, and it definitely doesn't scale as-is to anything except\nbuilt-in types and operators.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 23 Mar 2013 00:31:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index usage for tstzrange?"
}
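A different workaround, not raised in the thread: index the element as a singleton range so the existing GiST range operators apply. A sketch only, and it trades the btree for a GiST expression index that the query must repeat verbatim:

CREATE INDEX a_ts_range_idx ON a USING gist (tstzrange(ts, ts, '[]'));

SELECT *
FROM a
WHERE tstzrange(ts, ts, '[]') <@ tstzrange('2013-01-01', '2013-01-01 00:10:00');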
] |
[
{
"msg_contents": "I have two tables in Postgres 9.2 on a Linux server with 8GB of RAM. The\nfirst table has 60 million records:\n\nCREATE TABLE table1\n(\n id integer,\n update date,\n company character(35),\n address character(35),\n city character(20),\n state character(2),\n zip character(9),\n phone character(10),\n fips character(5),\n tract character(6),\n block character(4),\n status character(1),\n pre_title character(2),\n contact character(35),\n title character(20),\n pstat character(1),\n id integer NOT NULL,\n pkone character(2),\n pktwo character(2),\n pkthree character(2),\n pkfour character(2),\n centract character(15),\n CONSTRAINT table1_pkey PRIMARY KEY (id ),\n CONSTRAINT fipsc FOREIGN KEY (fips)\n REFERENCES fips (fips) MATCH FULL\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT statec FOREIGN KEY (state)\n REFERENCES state (state) MATCH FULL\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT tractc FOREIGN KEY (centract)\n REFERENCES tract (centract) MATCH FULL\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT zipc FOREIGN KEY (zip)\n REFERENCES zip (zip) MATCH FULL\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE table1\n OWNER TO postgres;\n\n\n-- Index: statidx2\n\n-- DROP INDEX statidx2;\n\nCREATE INDEX statidx2\n ON table1\n USING btree\n (state COLLATE pg_catalog.\"default\" );\n\nThe second table just has the 51 state records:\n\nCREATE TABLE state\n(\n state character(2) NOT NULL,\n state_name character(15),\n CONSTRAINT state_pkey PRIMARY KEY (state )\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE state\n OWNER TO postgres;\n\n-- Index: stateidx\n\n-- DROP INDEX stateidx;\n\nCREATE UNIQUE INDEX stateidx\n ON state\n USING btree\n (state COLLATE pg_catalog.\"default\" );\n\nWhen I run this query:\n\n select state.state, count(table1.id) from state,table1 where table1.state\n= state.state group by state.state\n\nIt takes almost 4 minutes with this output from explain:\n\n\"HashAggregate (cost=7416975.58..7416976.09 rows=51 width=7) (actual\ntime=284891.955..284891.964 rows=51 loops=1)\"\n\" -> Hash Join (cost=2.15..7139961.94 rows=55402728 width=7) (actual\ntime=0.049..269049.678 rows=60057057 loops=1)\"\n\" Hash Cond: (busbase.state = state.state)\"\n\" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728\nwidth=7) (actual time=0.004..250046.673 rows=60057057 loops=1)\"\n\" -> Hash (cost=1.51..1.51 rows=51 width=3) (actual\ntime=0.032..0.032 rows=51 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\n\" -> Seq Scan on state (cost=0.00..1.51 rows=51 width=3)\n(actual time=0.003..0.012 rows=51 loops=1)\"\n\"Total runtime: 284892.024 ms\"\n\nI've tried playing around with the settings in the config file for\nshared_buffers, work_mem, etc restarting Postgres each time and nothing\nseems to help.\n\nThanks for any help.\n\nI have two tables in Postgres 9.2 on a Linux server with 8GB of RAM. 
The first table has 60 million records:CREATE TABLE table1( id integer, update date,\n company character(35), address character(35), city character(20), state character(2), zip character(9), phone character(10), fips character(5),\n tract character(6), block character(4), status character(1), pre_title character(2), contact character(35), title character(20), pstat character(1),\n id integer NOT NULL, pkone character(2), pktwo character(2), pkthree character(2), pkfour character(2), centract character(15), CONSTRAINT table1_pkey PRIMARY KEY (id ),\n CONSTRAINT fipsc FOREIGN KEY (fips) REFERENCES fips (fips) MATCH FULL ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT statec FOREIGN KEY (state) REFERENCES state (state) MATCH FULL\n ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT tractc FOREIGN KEY (centract) REFERENCES tract (centract) MATCH FULL ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT zipc FOREIGN KEY (zip) REFERENCES zip (zip) MATCH FULL ON UPDATE NO ACTION ON DELETE NO ACTION)WITH ( OIDS=FALSE);\nALTER TABLE table1 OWNER TO postgres;-- Index: statidx2-- DROP INDEX statidx2;CREATE INDEX statidx2 ON table1\n USING btree (state COLLATE pg_catalog.\"default\" );The second table just has the 51 state records:CREATE TABLE state(\n state character(2) NOT NULL, state_name character(15), CONSTRAINT state_pkey PRIMARY KEY (state ))WITH ( OIDS=FALSE);ALTER TABLE state\n OWNER TO postgres;-- Index: stateidx-- DROP INDEX stateidx;CREATE UNIQUE INDEX stateidx ON state USING btree\n (state COLLATE pg_catalog.\"default\" );When I run this query: select state.state, count(table1.id) from state,table1 where table1.state = state.state group by state.state\nIt takes almost 4 minutes with this output from explain:\"HashAggregate (cost=7416975.58..7416976.09 rows=51 width=7) (actual time=284891.955..284891.964 rows=51 loops=1)\"\n\" -> Hash Join (cost=2.15..7139961.94 rows=55402728 width=7) (actual time=0.049..269049.678 rows=60057057 loops=1)\"\" Hash Cond: (busbase.state = state.state)\"\" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728 width=7) (actual time=0.004..250046.673 rows=60057057 loops=1)\"\n\" -> Hash (cost=1.51..1.51 rows=51 width=3) (actual time=0.032..0.032 rows=51 loops=1)\"\" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\" -> Seq Scan on state (cost=0.00..1.51 rows=51 width=3) (actual time=0.003..0.012 rows=51 loops=1)\"\n\"Total runtime: 284892.024 ms\"I've tried playing around with the settings in the config file for shared_buffers, work_mem, etc restarting Postgres each time and nothing seems to help.\nThanks for any help.",
"msg_date": "Fri, 22 Mar 2013 15:46:00 -0400",
"msg_from": "Cindy Makarowsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of query"
},
{
"msg_contents": "On 03/22/2013 12:46 PM, Cindy Makarowsky wrote:\n> I've tried playing around with the settings in the config file for\n> shared_buffers, work_mem, etc restarting Postgres each time and nothing\n> seems to help.\n\nWell, you're summarizing 55 million rows on an unindexed table:\n\n\" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728\nwidth=7) (actual time=0.004..250046.673 rows=60057057 loops=1)\"\n\n... that's where your time is going.\n\nMy only suggestion would be to create a composite index which matches\nthe group by condition on table1, and vacuum freeze the whole table so\nthat you can use index-only scan on 9.2.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 22 Mar 2013 14:13:50 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of query"
},
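Josh's suggestion as a concrete sketch; the index name is illustrative, and id is included only so that count(table1.id) can be answered from the index alone:

CREATE INDEX busbase_state_id_idx ON busbase (state, id);
VACUUM FREEZE ANALYZE busbase;

With the visibility map set by the vacuum, the GROUP BY should be able to run as an index-only scan on 9.2 instead of seq-scanning the whole table.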
{
"msg_contents": "But, I do have an index on Table1 on the state field which is in my group\nby condition:\n\nCREATE INDEX statidx2\n ON table1\n USING btree\n (state COLLATE pg_catalog.\"default\" );\n\nI have vacuumed the table too.\nOn Fri, Mar 22, 2013 at 5:13 PM, Josh Berkus <[email protected]> wrote:\n\n> On 03/22/2013 12:46 PM, Cindy Makarowsky wrote:\n> > I've tried playing around with the settings in the config file for\n> > shared_buffers, work_mem, etc restarting Postgres each time and nothing\n> > seems to help.\n>\n> Well, you're summarizing 55 million rows on an unindexed table:\n>\n> \" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728\n> width=7) (actual time=0.004..250046.673 rows=60057057 loops=1)\"\n>\n> ... that's where your time is going.\n>\n> My only suggestion would be to create a composite index which matches\n> the group by condition on table1, and vacuum freeze the whole table so\n> that you can use index-only scan on 9.2.\n>\n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nBut, I do have an index on Table1 on the state field which is in my group by condition:CREATE INDEX statidx2\n ON table1 USING btree\n (state COLLATE pg_catalog.\"default\" );I have vacuumed the table too.On Fri, Mar 22, 2013 at 5:13 PM, Josh Berkus <[email protected]> wrote:\nOn 03/22/2013 12:46 PM, Cindy Makarowsky wrote:\n> I've tried playing around with the settings in the config file for\n> shared_buffers, work_mem, etc restarting Postgres each time and nothing\n> seems to help.\n\nWell, you're summarizing 55 million rows on an unindexed table:\n\n\" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728\nwidth=7) (actual time=0.004..250046.673 rows=60057057 loops=1)\"\n\n... that's where your time is going.\n\nMy only suggestion would be to create a composite index which matches\nthe group by condition on table1, and vacuum freeze the whole table so\nthat you can use index-only scan on 9.2.\n\n--\nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 22 Mar 2013 18:20:15 -0400",
"msg_from": "Cindy Makarowsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of query"
},
{
"msg_contents": "Hi,\n\nthere is something mixed..\n\nyour index is on table1....\n\nExplain Analyze reports about table called: busbase....\n\nKind Regards,\n\nMisa\n\n\n\n\n2013/3/22 Cindy Makarowsky <[email protected]>\n\n> But, I do have an index on Table1 on the state field which is in my group\n> by condition:\n>\n> CREATE INDEX statidx2\n> ON table1\n> USING btree\n> (state COLLATE pg_catalog.\"default\" );\n>\n> I have vacuumed the table too.\n>\n> On Fri, Mar 22, 2013 at 5:13 PM, Josh Berkus <[email protected]> wrote:\n>\n>> On 03/22/2013 12:46 PM, Cindy Makarowsky wrote:\n>> > I've tried playing around with the settings in the config file for\n>> > shared_buffers, work_mem, etc restarting Postgres each time and nothing\n>> > seems to help.\n>>\n>> Well, you're summarizing 55 million rows on an unindexed table:\n>>\n>> \" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728\n>> width=7) (actual time=0.004..250046.673 rows=60057057 loops=1)\"\n>>\n>> ... that's where your time is going.\n>>\n>> My only suggestion would be to create a composite index which matches\n>> the group by condition on table1, and vacuum freeze the whole table so\n>> that you can use index-only scan on 9.2.\n>>\n>> --\n>> Josh Berkus\n>> PostgreSQL Experts Inc.\n>> http://pgexperts.com\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n\nHi,there is something mixed..your index is on table1....Explain Analyze reports about table called: busbase....\nKind Regards,Misa2013/3/22 Cindy Makarowsky <[email protected]>\nBut, I do have an index on Table1 on the state field which is in my group by condition:\nCREATE INDEX statidx2\n ON table1 USING btree\n (state COLLATE pg_catalog.\"default\" );I have vacuumed the table too.On Fri, Mar 22, 2013 at 5:13 PM, Josh Berkus <[email protected]> wrote:\nOn 03/22/2013 12:46 PM, Cindy Makarowsky wrote:\n> I've tried playing around with the settings in the config file for\n> shared_buffers, work_mem, etc restarting Postgres each time and nothing\n> seems to help.\n\nWell, you're summarizing 55 million rows on an unindexed table:\n\n\" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728\nwidth=7) (actual time=0.004..250046.673 rows=60057057 loops=1)\"\n\n... that's where your time is going.\n\nMy only suggestion would be to create a composite index which matches\nthe group by condition on table1, and vacuum freeze the whole table so\nthat you can use index-only scan on 9.2.\n\n--\nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 22 Mar 2013 23:25:43 +0100",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of query"
},
{
"msg_contents": "I changed the name of the table for the post but forgot to change it in the\nresults of the explain. Table1 is busbase.\n\nOn Fri, Mar 22, 2013 at 6:25 PM, Misa Simic <[email protected]> wrote:\n\n> Hi,\n>\n> there is something mixed..\n>\n> your index is on table1....\n>\n> Explain Analyze reports about table called: busbase....\n>\n> Kind Regards,\n>\n> Misa\n>\n>\n>\n>\n> 2013/3/22 Cindy Makarowsky <[email protected]>\n>\n>> But, I do have an index on Table1 on the state field which is in my group\n>> by condition:\n>>\n>> CREATE INDEX statidx2\n>> ON table1\n>> USING btree\n>> (state COLLATE pg_catalog.\"default\" );\n>>\n>> I have vacuumed the table too.\n>>\n>> On Fri, Mar 22, 2013 at 5:13 PM, Josh Berkus <[email protected]> wrote:\n>>\n>>> On 03/22/2013 12:46 PM, Cindy Makarowsky wrote:\n>>> > I've tried playing around with the settings in the config file for\n>>> > shared_buffers, work_mem, etc restarting Postgres each time and nothing\n>>> > seems to help.\n>>>\n>>> Well, you're summarizing 55 million rows on an unindexed table:\n>>>\n>>> \" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728\n>>> width=7) (actual time=0.004..250046.673 rows=60057057 loops=1)\"\n>>>\n>>> ... that's where your time is going.\n>>>\n>>> My only suggestion would be to create a composite index which matches\n>>> the group by condition on table1, and vacuum freeze the whole table so\n>>> that you can use index-only scan on 9.2.\n>>>\n>>> --\n>>> Josh Berkus\n>>> PostgreSQL Experts Inc.\n>>> http://pgexperts.com\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list (\n>>> [email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>\n>>\n>\n\nI changed the name of the table for the post but forgot to change it in the results of the explain. Table1 is busbase.On Fri, Mar 22, 2013 at 6:25 PM, Misa Simic <[email protected]> wrote:\nHi,there is something mixed..your index is on table1....\nExplain Analyze reports about table called: busbase....\nKind Regards,Misa2013/3/22 Cindy Makarowsky <[email protected]>\nBut, I do have an index on Table1 on the state field which is in my group by condition:\nCREATE INDEX statidx2\n ON table1 USING btree\n (state COLLATE pg_catalog.\"default\" );I have vacuumed the table too.On Fri, Mar 22, 2013 at 5:13 PM, Josh Berkus <[email protected]> wrote:\nOn 03/22/2013 12:46 PM, Cindy Makarowsky wrote:\n> I've tried playing around with the settings in the config file for\n> shared_buffers, work_mem, etc restarting Postgres each time and nothing\n> seems to help.\n\nWell, you're summarizing 55 million rows on an unindexed table:\n\n\" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728\nwidth=7) (actual time=0.004..250046.673 rows=60057057 loops=1)\"\n\n... that's where your time is going.\n\nMy only suggestion would be to create a composite index which matches\nthe group by condition on table1, and vacuum freeze the whole table so\nthat you can use index-only scan on 9.2.\n\n--\nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 22 Mar 2013 18:26:36 -0400",
"msg_from": "Cindy Makarowsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of query"
},
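[Editor's note] A minimal sketch of the fix Josh suggests above, written against the real table name from the plan (busbase) and the column names used in this thread; the index name is illustrative only, and a later message in the thread creates essentially the same index as comp_statidx2:

CREATE INDEX busbase_state_id_idx ON busbase USING btree (state, id);
VACUUM FREEZE busbase;  -- refreshes the visibility map so 9.2 can answer the count from the index alone
EXPLAIN ANALYZE
SELECT state.state, count(busbase.id)
FROM state, busbase
WHERE busbase.state = state.state
GROUP BY state.state;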
{
"msg_contents": "On Friday, March 22, 2013, Cindy Makarowsky wrote:\n\n> I have two tables in Postgres 9.2 on a Linux server with 8GB of RAM. The\n> first table has 60 million records:\n\n\nYou have over 40GB of data in that table, so there is no way you are going\nto get it into 8GB RAM without some major reorganization.\n\n\n> company character(35),\n> address character(35),\n> city character(20),\n> contact character(35),\n> title character(20),\n>\n\n\nAll of those fixed width fields are probably taking up needless space, and\nin your case, space is time. Varchar would probably be better. (And\nprobably longer maximum lengths as well. Most people don't need more than\n35 characters for their addresses, but the people who do are going to be\ncheesed off when you inform them that you deem their address to be\nunreasonable. Unless your mailing labels only hold 35 characters)\n\n\n\n> When I run this query:\n>\n> select state.state, count(table1.id) from state,table1 where\n> table1.state = state.state group by state.state\n>\n\n\nThe join to the \"state\" table is not necessary. Between the foreign key\nand the primary key, you know that every state exists, and that every state\nexists only once. But, that will not solve your problem, as the join to\nthe state table is not where the time goes.\n\n\n\n> \" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728\n> width=7) (actual time=0.004..250046.673 rows=60057057 loops=1)\"\n>\n\nAssuming that your cost parameters are all default, this means you have\n(6378172.28 - 0.01* 55402728)/1 = 5.8e6 pages, or 44.4 GB of table. That\nis, less than 10 tuples per page.\n\nTightly packed, you should be able to hold over 30 tuples per page. You\nare probably not vacuuming aggressively enough, or you were not doing so in\nthe past and never did a \"vacuum full\" to reclaim the bloated space.\n\nIn any event, your sequential scan is running at 181 MB/s. Is this what\nyou would expect given your IO hardware?\n\n\n\n>\n> I've tried playing around with the settings in the config file for\n> shared_buffers, work_mem, etc restarting Postgres each time and nothing\n> seems to help.\n>\n\nHow fast do you think it should run? How fast do you need it to run? This\nseems like the type of query that would get run once per financial quarter,\nor maybe once per day on off-peak times.\n\nCheers,\n\nJeff\n\nOn Friday, March 22, 2013, Cindy Makarowsky wrote:I have two tables in Postgres 9.2 on a Linux server with 8GB of RAM. The first table has 60 million records:\nYou have over 40GB of data in that table, so there is no way you are going to get it into 8GB RAM without some major reorganization. \n company character(35), address character(35), city character(20), contact character(35), title character(20),\nAll of those fixed width fields are probably taking up needless space, and in your case, space is time. Varchar would probably be better. (And probably longer maximum lengths as well. Most people don't need more than 35 characters for their addresses, but the people who do are going to be cheesed off when you inform them that you deem their address to be unreasonable. Unless your mailing labels only hold 35 characters)\n When I run this query: select state.state, count(table1.id) from state,table1 where table1.state = state.state group by state.state\nThe join to the \"state\" table is not necessary. Between the foreign key and the primary key, you know that every state exists, and that every state exists only once. 
But, that will not solve your problem, as the join to the state table is not where the time goes.\n \" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728 width=7) (actual time=0.004..250046.673 rows=60057057 loops=1)\"\nAssuming that your cost parameters are all default, this means you have (6378172.28 - 0.01* 55402728)/1 = 5.8e6 pages, or 44.4 GB of table. That is, less than 10 tuples per page. \nTightly packed, you should be able to hold over 30 tuples per page. You are probably not vacuuming aggressively enough, or you were not doing so in the past and never did a \"vacuum full\" to reclaim the bloated space.\nIn any event, your sequential scan is running at 181 MB/s. Is this what you would expect given your IO hardware? \nI've tried playing around with the settings in the config file for shared_buffers, work_mem, etc restarting Postgres each time and nothing seems to help.\nHow fast do you think it should run? How fast do you need it to run? This seems like the type of query that would get run once per financial quarter, or maybe once per day on off-peak times. \nCheers,Jeff",
"msg_date": "Sat, 23 Mar 2013 14:53:20 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of query"
},
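[Editor's note] Jeff's tuples-per-page estimate can be cross-checked directly from the catalogs; a sketch, keeping the thread's table name busbase (relpages and reltuples are estimates from the last ANALYZE, so the ratio is approximate):

SELECT c.relpages,
       c.reltuples,
       round(c.reltuples / nullif(c.relpages, 0)) AS est_tuples_per_page,
       pg_size_pretty(pg_relation_size('busbase')) AS heap_size
FROM pg_class c
WHERE c.relname = 'busbase';

If est_tuples_per_page is far below the ~30 Jeff mentions, the bloat he describes is confirmed and a VACUUM FULL (with the usual caveat that it takes an exclusive lock) would reclaim the space.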
{
"msg_contents": "Hi Jeff,\n\nIt seems my previous mail has not showed up in the list... copied/pasted\nagain belloew\n\nHowever, you said something important:\n\n\"The join to the \"state\" table is not necessary. Between the foreign key\nand the primary key, you know that every state exists, and that every state\nexists only once. But, that will not solve your problem, as the join to\nthe state table is not where the time goes.\"\n\nI think it is something what planner could/should be \"aware off\"... and\ndiscard the join\n\n\" Merge Join (cost=0.00..2310285.02 rows=60057056 width=3) (actual\ntime=38.424..41992.070 rows=60057057 loops=1)\"\n\" Merge Cond: (state.state = busbase.state)\"\n\nthis part from bellow plan would save significant time if planner didn't\ndecide to take this step at all ....\n\nKind regards,\n\nMisa\n\n\n\n\n\"\nHi Cindy\n\nTBH - I don't know...\n\nI have added this to list so maybe someone else can help...\n\nTo recap:\n\nfrom start situation (table structure and indexes are in the first mail in\nthis thread)\n\nEXPLAIN ANALYZE\nSELECT busbase.state AS c0, count(busbase.id) AS m0 FROM busbase INNER JOIN\nstate USING (state)\nGROUP BY busbase.state\n\nsays:\n\"HashAggregate (cost=7416975.58..7416976.09 rows=51 width=7) (actual\ntime=285339.465..285339.473 rows=51 loops=1)\"\n\" -> Hash Join (cost=2.15..7139961.94 rows=55402728 width=7) (actual\ntime=0.066..269527.934 rows=60057057 loops=1)\"\n\" Hash Cond: (busbase.state = state.state)\"\n\" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728\nwidth=7) (actual time=0.022..251029.307 rows=60057057 loops=1)\"\n\" -> Hash (cost=1.51..1.51 rows=51 width=3) (actual\ntime=0.028..0.028 rows=51 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\n\" -> Seq Scan on state (cost=0.00..1.51 rows=51 width=3)\n(actual time=0.003..0.019 rows=51 loops=1)\"\n\"Total runtime: 285339.516 ms\"\n\non created composite index\nCREATE INDEX comp_statidx2\n ON busbase\n USING btree\n (state, id );\n\n\nwe got:\n\n\"GroupAggregate (cost=0.00..2610570.81 rows=51 width=3) (actual\ntime=98.923..51033.888 rows=51 loops=1)\"\n\" -> Merge Join (cost=0.00..2310285.02 rows=60057056 width=3) (actual\ntime=38.424..41992.070 rows=60057057 loops=1)\"\n\" Merge Cond: (state.state = busbase.state)\"\n\" -> Index Only Scan using state_pkey on state (cost=0.00..13.02\nrows=51 width=3) (actual time=0.008..0.148 rows=51 loops=1)\"\n\" Heap Fetches: 51\"\n\" -> Index Only Scan using comp_statidx2 on busbase\n (cost=0.00..1559558.68 rows=60057056 width=3) (actual\ntime=38.408..12883.575 rows=60057057 loops=1)\"\n\" Heap Fetches: 0\"\n\"Total runtime: 51045.648 ms\"\n\n\nQuestion is - is it possible to improve it more?\n\"\n\n\nHi Jeff,It seems my previous mail has not showed up in the list... copied/pasted again belloewHowever, you said something important:\n\"The join to the \"state\" table is not necessary. Between the foreign key and the primary key, you know that every state exists, and that every state exists only once. But, that will not solve your problem, as the join to the state table is not where the time goes.\"\nI think it is something what planner could/should be \"aware off\"... 
and discard the join \n\" Merge Join (cost=0.00..2310285.02 rows=60057056 width=3) (actual time=38.424..41992.070 rows=60057057 loops=1)\"\n\" Merge Cond: (state.state = busbase.state)\"\nthis part from bellow plan would save significant time if planner didn't decide to take this step at all ....\nKind regards,\nMisa\"Hi CindyTBH - I don't know...\nI have added this to list so maybe someone else can help...To recap:from start situation (table structure and indexes are in the first mail in this thread)\nEXPLAIN ANALYZESELECT busbase.state AS c0, count(busbase.id) AS m0 FROM busbase INNER JOIN state USING (state)\nGROUP BY busbase.statesays:\n\"HashAggregate (cost=7416975.58..7416976.09 rows=51 width=7) (actual time=285339.465..285339.473 rows=51 loops=1)\"\" -> Hash Join (cost=2.15..7139961.94 rows=55402728 width=7) (actual time=0.066..269527.934 rows=60057057 loops=1)\"\n\" Hash Cond: (busbase.state = state.state)\"\" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728 width=7) (actual time=0.022..251029.307 rows=60057057 loops=1)\"\n\" -> Hash (cost=1.51..1.51 rows=51 width=3) (actual time=0.028..0.028 rows=51 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\" -> Seq Scan on state (cost=0.00..1.51 rows=51 width=3) (actual time=0.003..0.019 rows=51 loops=1)\"\n\"Total runtime: 285339.516 ms\"\non created composite index CREATE INDEX comp_statidx2\n ON busbase USING btree (state, id );\nwe got:\n\"GroupAggregate (cost=0.00..2610570.81 rows=51 width=3) (actual time=98.923..51033.888 rows=51 loops=1)\"\n\" -> Merge Join (cost=0.00..2310285.02 rows=60057056 width=3) (actual time=38.424..41992.070 rows=60057057 loops=1)\"\" Merge Cond: (state.state = busbase.state)\"\n\" -> Index Only Scan using state_pkey on state (cost=0.00..13.02 rows=51 width=3) (actual time=0.008..0.148 rows=51 loops=1)\"\" Heap Fetches: 51\"\n\" -> Index Only Scan using comp_statidx2 on busbase (cost=0.00..1559558.68 rows=60057056 width=3) (actual time=38.408..12883.575 rows=60057057 loops=1)\"\n\" Heap Fetches: 0\"\"Total runtime: 51045.648 ms\"Question is - is it possible to improve it more?\n\"",
"msg_date": "Sat, 23 Mar 2013 23:27:36 +0100",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of query"
},
{
"msg_contents": "I assume there are reasons not to throw away join to state. May be it still\ncan be done as the last thing. This should help further:\nSELECT counts.* FROM (\n SELECT busbase.state AS state, count(busbase.id) AS m0 FROM busbase\n GROUP BY busbase.state ) AS counts\n INNER JOIN state USING (state)\n\nRegards,\nRoman Konoval\n\n\nOn Sun, Mar 24, 2013 at 12:27 AM, Misa Simic <[email protected]> wrote:\n\n> Hi Jeff,\n>\n> It seems my previous mail has not showed up in the list... copied/pasted\n> again belloew\n>\n> However, you said something important:\n>\n> \"The join to the \"state\" table is not necessary. Between the foreign key\n> and the primary key, you know that every state exists, and that every state\n> exists only once. But, that will not solve your problem, as the join to\n> the state table is not where the time goes.\"\n>\n> I think it is something what planner could/should be \"aware off\"... and\n> discard the join\n>\n> \" Merge Join (cost=0.00..2310285.02 rows=60057056 width=3) (actual\n> time=38.424..41992.070 rows=60057057 loops=1)\"\n> \" Merge Cond: (state.state = busbase.state)\"\n>\n> this part from bellow plan would save significant time if planner didn't\n> decide to take this step at all ....\n>\n> Kind regards,\n>\n> Misa\n>\n>\n>\n>\n> \"\n> Hi Cindy\n>\n> TBH - I don't know...\n>\n> I have added this to list so maybe someone else can help...\n>\n> To recap:\n>\n> from start situation (table structure and indexes are in the first mail in\n> this thread)\n>\n> EXPLAIN ANALYZE\n> SELECT busbase.state AS c0, count(busbase.id) AS m0 FROM busbase INNER\n> JOIN state USING (state)\n> GROUP BY busbase.state\n>\n> says:\n> \"HashAggregate (cost=7416975.58..7416976.09 rows=51 width=7) (actual\n> time=285339.465..285339.473 rows=51 loops=1)\"\n> \" -> Hash Join (cost=2.15..7139961.94 rows=55402728 width=7) (actual\n> time=0.066..269527.934 rows=60057057 loops=1)\"\n> \" Hash Cond: (busbase.state = state.state)\"\n> \" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728\n> width=7) (actual time=0.022..251029.307 rows=60057057 loops=1)\"\n> \" -> Hash (cost=1.51..1.51 rows=51 width=3) (actual\n> time=0.028..0.028 rows=51 loops=1)\"\n> \" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\n> \" -> Seq Scan on state (cost=0.00..1.51 rows=51 width=3)\n> (actual time=0.003..0.019 rows=51 loops=1)\"\n> \"Total runtime: 285339.516 ms\"\n>\n> on created composite index\n> CREATE INDEX comp_statidx2\n> ON busbase\n> USING btree\n> (state, id );\n>\n>\n> we got:\n>\n> \"GroupAggregate (cost=0.00..2610570.81 rows=51 width=3) (actual\n> time=98.923..51033.888 rows=51 loops=1)\"\n> \" -> Merge Join (cost=0.00..2310285.02 rows=60057056 width=3) (actual\n> time=38.424..41992.070 rows=60057057 loops=1)\"\n> \" Merge Cond: (state.state = busbase.state)\"\n> \" -> Index Only Scan using state_pkey on state (cost=0.00..13.02\n> rows=51 width=3) (actual time=0.008..0.148 rows=51 loops=1)\"\n> \" Heap Fetches: 51\"\n> \" -> Index Only Scan using comp_statidx2 on busbase\n> (cost=0.00..1559558.68 rows=60057056 width=3) (actual\n> time=38.408..12883.575 rows=60057057 loops=1)\"\n> \" Heap Fetches: 0\"\n> \"Total runtime: 51045.648 ms\"\n>\n>\n> Question is - is it possible to improve it more?\n> \"\n> \n>\n\nI assume there are reasons not to throw away join to state. May be it still can be done as the last thing. 
This should help further:SELECT counts.* FROM (\n\n SELECT busbase.state AS state, count(busbase.id) AS m0 FROM busbase GROUP BY busbase.state ) AS counts\n INNER JOIN state USING (state)\nRegards,\nRoman Konoval On Sun, Mar 24, 2013 at 12:27 AM, Misa Simic <[email protected]> wrote:\nHi Jeff,It seems my previous mail has not showed up in the list... copied/pasted again belloew\nHowever, you said something important:\n\"The join to the \"state\" table is not necessary. Between the foreign key and the primary key, you know that every state exists, and that every state exists only once. But, that will not solve your problem, as the join to the state table is not where the time goes.\"\nI think it is something what planner could/should be \"aware off\"... and discard the join \n\" Merge Join (cost=0.00..2310285.02 rows=60057056 width=3) (actual time=38.424..41992.070 rows=60057057 loops=1)\"\n\" Merge Cond: (state.state = busbase.state)\"\nthis part from bellow plan would save significant time if planner didn't decide to take this step at all ....\nKind regards,\n\n\nMisa\"Hi CindyTBH - I don't know...\nI have added this to list so maybe someone else can help...To recap:from start situation (table structure and indexes are in the first mail in this thread)\nEXPLAIN ANALYZESELECT busbase.state AS c0, count(busbase.id) AS m0 FROM busbase INNER JOIN state USING (state)\nGROUP BY busbase.statesays:\n\n\n\"HashAggregate (cost=7416975.58..7416976.09 rows=51 width=7) (actual time=285339.465..285339.473 rows=51 loops=1)\"\" -> Hash Join (cost=2.15..7139961.94 rows=55402728 width=7) (actual time=0.066..269527.934 rows=60057057 loops=1)\"\n\n\" Hash Cond: (busbase.state = state.state)\"\" -> Seq Scan on busbase (cost=0.00..6378172.28 rows=55402728 width=7) (actual time=0.022..251029.307 rows=60057057 loops=1)\"\n\" -> Hash (cost=1.51..1.51 rows=51 width=3) (actual time=0.028..0.028 rows=51 loops=1)\"\n\n\n\" Buckets: 1024 Batches: 1 Memory Usage: 2kB\"\" -> Seq Scan on state (cost=0.00..1.51 rows=51 width=3) (actual time=0.003..0.019 rows=51 loops=1)\"\n\"Total runtime: 285339.516 ms\"\non created composite index CREATE INDEX comp_statidx2\n\n\n ON busbase USING btree (state, id );\nwe got:\n\"GroupAggregate (cost=0.00..2610570.81 rows=51 width=3) (actual time=98.923..51033.888 rows=51 loops=1)\"\n\" -> Merge Join (cost=0.00..2310285.02 rows=60057056 width=3) (actual time=38.424..41992.070 rows=60057057 loops=1)\"\" Merge Cond: (state.state = busbase.state)\"\n\" -> Index Only Scan using state_pkey on state (cost=0.00..13.02 rows=51 width=3) (actual time=0.008..0.148 rows=51 loops=1)\"\" Heap Fetches: 51\"\n\" -> Index Only Scan using comp_statidx2 on busbase (cost=0.00..1559558.68 rows=60057056 width=3) (actual time=38.408..12883.575 rows=60057057 loops=1)\"\n\n\n\" Heap Fetches: 0\"\"Total runtime: 51045.648 ms\"Question is - is it possible to improve it more?\n\n\"",
"msg_date": "Sun, 24 Mar 2013 08:45:00 +0200",
"msg_from": "Roman Konoval <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of query"
},
{
"msg_contents": "On Sat, Mar 23, 2013 at 3:27 PM, Misa Simic <[email protected]> wrote:\n\n> Hi Jeff,\n>\n> It seems my previous mail has not showed up in the list... copied/pasted\n> again belloew\n>\n> However, you said something important:\n>\n> \"The join to the \"state\" table is not necessary. Between the foreign key\n> and the primary key, you know that every state exists, and that every state\n> exists only once. But, that will not solve your problem, as the join to\n> the state table is not where the time goes.\"\n>\n> I think it is something what planner could/should be \"aware off\"... and\n> discard the join\n>\n\nI thought that this was on the To Do list (\nhttp://wiki.postgresql.org/wiki/Todo) but if it is, I can't find it.\n\nI think the main concern was that it might add substantial planning time to\nall queries, even ones that would not benefit from it. I don't know if\nthere is a way to address this concern, other then to implement it and see\nwhat happens.\n\n...\n\n>\n> EXPLAIN ANALYZE\n> SELECT busbase.state AS c0, count(busbase.id) AS m0 FROM busbase INNER\n> JOIN state USING (state)\n> GROUP BY busbase.state\n>\n\nIn the original email, the table definition listed \"id\" twice, once with a\nnot null constraint. If it is truly not null, then this count could be\nreplaced with count(1), in which case the original index on (state) would\nbe sufficient, the composite on (count, id) would not be necessary. (Yes,\nthis is another thing the planner could, in theory, recognize on your\nbehalf)\n\nBased on the use of column aliases which are less meaningful than the\noriginal column names were, I'm assuming that this is generated SQL that\nyou have no control over?\n\n\n> on created composite index\n> CREATE INDEX comp_statidx2\n> ON busbase\n> USING btree\n> (state, id );\n>\n\n\n\n\n>\n>\n> we got:\n>\n> \"GroupAggregate (cost=0.00..2610570.81 rows=51 width=3) (actual\n> time=98.923..51033.888 rows=51 loops=1)\"\n> \" -> Merge Join (cost=0.00..2310285.02 rows=60057056 width=3) (actual\n> time=38.424..41992.070 rows=60057057 loops=1)\"\n> \" Merge Cond: (state.state = busbase.state)\"\n> \" -> Index Only Scan using state_pkey on state (cost=0.00..13.02\n> rows=51 width=3) (actual time=0.008..0.148 rows=51 loops=1)\"\n> \" Heap Fetches: 51\"\n> \" -> Index Only Scan using comp_statidx2 on busbase\n> (cost=0.00..1559558.68 rows=60057056 width=3) (actual\n> time=38.408..12883.575 rows=60057057 loops=1)\"\n> \" Heap Fetches: 0\"\n> \"Total runtime: 51045.648 ms\"\n>\n>\nI don't understand why you are getting a merge join rather than a hash\njoin. Nor why there is such a big difference between the actual time of\nthe index only scan and of the merge join itself. I would think the two\nshould be about equal. Perhaps I just don't understand the semantics of\nreported actual time for merge joins.\n\nDuring normal operations, how much of busbase is going to be all_visible at\nany given time? If that table sees high turnover, this plan might not work\nwell on the production system.\n\n\nCheers,\n\nJeff\n\nOn Sat, Mar 23, 2013 at 3:27 PM, Misa Simic <[email protected]> wrote:\nHi Jeff,It seems my previous mail has not showed up in the list... copied/pasted again belloewHowever, you said something important:\n\"The join to the \"state\" table is not necessary. Between the foreign key and the primary key, you know that every state exists, and that every state exists only once. 
But, that will not solve your problem, as the join to the state table is not where the time goes.\"\nI think it is something what planner could/should be \"aware off\"... and discard the join \nI thought that this was on the To Do list (http://wiki.postgresql.org/wiki/Todo) but if it is, I can't find it.\nI think the main concern was that it might add substantial planning time to all queries, even ones that would not benefit from it. I don't know if there is a way to address this concern, other then to implement it and see what happens.\n...EXPLAIN ANALYZE\nSELECT busbase.state AS c0, count(busbase.id) AS m0 FROM busbase INNER JOIN state USING (state)\nGROUP BY busbase.stateIn the original email, the table definition listed \"id\" twice, once with a not null constraint. If it is truly not null, then this count could be replaced with count(1), in which case the original index on (state) would be sufficient, the composite on (count, id) would not be necessary. (Yes, this is another thing the planner could, in theory, recognize on your behalf)\nBased on the use of column aliases which are less meaningful than the original column names were, I'm assuming that this is generated SQL that you have no control over?\non created composite index \nCREATE INDEX comp_statidx2\n ON busbase USING btree (state, id );\n \nwe got:\n\"GroupAggregate (cost=0.00..2610570.81 rows=51 width=3) (actual time=98.923..51033.888 rows=51 loops=1)\"\n\" -> Merge Join (cost=0.00..2310285.02 rows=60057056 width=3) (actual time=38.424..41992.070 rows=60057057 loops=1)\"\" Merge Cond: (state.state = busbase.state)\"\n\" -> Index Only Scan using state_pkey on state (cost=0.00..13.02 rows=51 width=3) (actual time=0.008..0.148 rows=51 loops=1)\"\" Heap Fetches: 51\"\n\" -> Index Only Scan using comp_statidx2 on busbase (cost=0.00..1559558.68 rows=60057056 width=3) (actual time=38.408..12883.575 rows=60057057 loops=1)\"\n\n\" Heap Fetches: 0\"\"Total runtime: 51045.648 ms\"I don't understand why you are getting a merge join rather than a hash join. Nor why there is such a big difference between the actual time of the index only scan and of the merge join itself. I would think the two should be about equal. Perhaps I just don't understand the semantics of reported actual time for merge joins.\nDuring normal operations, how much of busbase is going to be all_visible at any given time? If that table sees high turnover, this plan might not work well on the production system.\n Cheers,Jeff",
"msg_date": "Mon, 25 Mar 2013 11:18:28 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of query"
},
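[Editor's note] To make Jeff's count(1) point concrete: with busbase.id declared NOT NULL, the query below returns the same result as the Mondrian-generated one while only needing the existing single-column index on state for an index-only scan (whether the generated SQL can be changed is addressed in the next reply):

SELECT busbase.state AS c0, count(*) AS m0
FROM busbase
INNER JOIN state USING (state)
GROUP BY busbase.state;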
{
"msg_contents": "I basically don't have any control over the generated select statement.\n I'm using Mondrian and that is the select statement that gets passed to\nPostgres. You're right that if you remove the count(id), the query is\nfaster but I can't do that since the select statement is being executed\nfrom Mondrian.\n\nOn Mon, Mar 25, 2013 at 2:18 PM, Jeff Janes <[email protected]> wrote:\n\n> On Sat, Mar 23, 2013 at 3:27 PM, Misa Simic <[email protected]> wrote:\n>\n>> Hi Jeff,\n>>\n>> It seems my previous mail has not showed up in the list... copied/pasted\n>> again belloew\n>>\n>> However, you said something important:\n>>\n>> \"The join to the \"state\" table is not necessary. Between the foreign\n>> key and the primary key, you know that every state exists, and that every\n>> state exists only once. But, that will not solve your problem, as the join\n>> to the state table is not where the time goes.\"\n>>\n>> I think it is something what planner could/should be \"aware off\"... and\n>> discard the join\n>>\n>\n> I thought that this was on the To Do list (\n> http://wiki.postgresql.org/wiki/Todo) but if it is, I can't find it.\n>\n> I think the main concern was that it might add substantial planning time\n> to all queries, even ones that would not benefit from it. I don't know if\n> there is a way to address this concern, other then to implement it and see\n> what happens.\n>\n> ...\n>\n>>\n>> EXPLAIN ANALYZE\n>> SELECT busbase.state AS c0, count(busbase.id) AS m0 FROM busbase INNER\n>> JOIN state USING (state)\n>> GROUP BY busbase.state\n>>\n>\n> In the original email, the table definition listed \"id\" twice, once with a\n> not null constraint. If it is truly not null, then this count could be\n> replaced with count(1), in which case the original index on (state) would\n> be sufficient, the composite on (count, id) would not be necessary. (Yes,\n> this is another thing the planner could, in theory, recognize on your\n> behalf)\n>\n> Based on the use of column aliases which are less meaningful than the\n> original column names were, I'm assuming that this is generated SQL that\n> you have no control over?\n>\n>\n>> on created composite index\n>> CREATE INDEX comp_statidx2\n>> ON busbase\n>> USING btree\n>> (state, id );\n>>\n>\n>\n>\n>\n>>\n>>\n>> we got:\n>>\n>> \"GroupAggregate (cost=0.00..2610570.81 rows=51 width=3) (actual\n>> time=98.923..51033.888 rows=51 loops=1)\"\n>> \" -> Merge Join (cost=0.00..2310285.02 rows=60057056 width=3) (actual\n>> time=38.424..41992.070 rows=60057057 loops=1)\"\n>> \" Merge Cond: (state.state = busbase.state)\"\n>> \" -> Index Only Scan using state_pkey on state (cost=0.00..13.02\n>> rows=51 width=3) (actual time=0.008..0.148 rows=51 loops=1)\"\n>> \" Heap Fetches: 51\"\n>> \" -> Index Only Scan using comp_statidx2 on busbase\n>> (cost=0.00..1559558.68 rows=60057056 width=3) (actual\n>> time=38.408..12883.575 rows=60057057 loops=1)\"\n>> \" Heap Fetches: 0\"\n>> \"Total runtime: 51045.648 ms\"\n>>\n>>\n> I don't understand why you are getting a merge join rather than a hash\n> join. Nor why there is such a big difference between the actual time of\n> the index only scan and of the merge join itself. I would think the two\n> should be about equal. Perhaps I just don't understand the semantics of\n> reported actual time for merge joins.\n>\n> During normal operations, how much of busbase is going to be all_visible\n> at any given time? 
If that table sees high turnover, this plan might not\n> work well on the production system.\n>\n>\n> Cheers,\n>\n> Jeff\n>\n\nI basically don't have any control over the generated select statement. I'm using Mondrian and that is the select statement that gets passed to Postgres. You're right that if you remove the count(id), the query is faster but I can't do that since the select statement is being executed from Mondrian.\nOn Mon, Mar 25, 2013 at 2:18 PM, Jeff Janes <[email protected]> wrote:\nOn Sat, Mar 23, 2013 at 3:27 PM, Misa Simic <[email protected]> wrote:\n\nHi Jeff,It seems my previous mail has not showed up in the list... copied/pasted again belloewHowever, you said something important:\n\"The join to the \"state\" table is not necessary. Between the foreign key and the primary key, you know that every state exists, and that every state exists only once. But, that will not solve your problem, as the join to the state table is not where the time goes.\"\nI think it is something what planner could/should be \"aware off\"... and discard the join \nI thought that this was on the To Do list (http://wiki.postgresql.org/wiki/Todo) but if it is, I can't find it.\n\nI think the main concern was that it might add substantial planning time to all queries, even ones that would not benefit from it. I don't know if there is a way to address this concern, other then to implement it and see what happens.\n...EXPLAIN ANALYZE\n\nSELECT busbase.state AS c0, count(busbase.id) AS m0 FROM busbase INNER JOIN state USING (state)\nGROUP BY busbase.stateIn the original email, the table definition listed \"id\" twice, once with a not null constraint. If it is truly not null, then this count could be replaced with count(1), in which case the original index on (state) would be sufficient, the composite on (count, id) would not be necessary. (Yes, this is another thing the planner could, in theory, recognize on your behalf)\nBased on the use of column aliases which are less meaningful than the original column names were, I'm assuming that this is generated SQL that you have no control over?\n\non created composite index \n\nCREATE INDEX comp_statidx2\n ON busbase USING btree (state, id );\n \nwe got:\n\"GroupAggregate (cost=0.00..2610570.81 rows=51 width=3) (actual time=98.923..51033.888 rows=51 loops=1)\"\n\" -> Merge Join (cost=0.00..2310285.02 rows=60057056 width=3) (actual time=38.424..41992.070 rows=60057057 loops=1)\"\" Merge Cond: (state.state = busbase.state)\"\n\" -> Index Only Scan using state_pkey on state (cost=0.00..13.02 rows=51 width=3) (actual time=0.008..0.148 rows=51 loops=1)\"\" Heap Fetches: 51\"\n\" -> Index Only Scan using comp_statidx2 on busbase (cost=0.00..1559558.68 rows=60057056 width=3) (actual time=38.408..12883.575 rows=60057057 loops=1)\"\n\n\n\" Heap Fetches: 0\"\"Total runtime: 51045.648 ms\"I don't understand why you are getting a merge join rather than a hash join. Nor why there is such a big difference between the actual time of the index only scan and of the merge join itself. I would think the two should be about equal. Perhaps I just don't understand the semantics of reported actual time for merge joins.\nDuring normal operations, how much of busbase is going to be all_visible at any given time? If that table sees high turnover, this plan might not work well on the production system.\n Cheers,Jeff",
"msg_date": "Mon, 25 Mar 2013 14:55:25 -0400",
"msg_from": "Cindy Makarowsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of query"
}
] |
[
{
"msg_contents": "HI,\n\nI have a wierd problem with PostgreSQL planner...\n\nProblem showed up in Production on PG9.1 (Ubuntu)\n\nBut I have succeeded to get the same behavior on my PG 9.2 on Windows...\n\nit is about 3 tables & onad one view - but view have volatile function:\n\n\nCREATE TABLE t1\n(\n calc_id serial NOT NULL,\n thing_id integer,\n CONSTRAINT t1_pk PRIMARY KEY (calc_id)\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE t1\n OWNER TO postgres;\n\n-- Index: t1_thing_id_idx\n\n-- DROP INDEX t1_thing_id_idx;\n\nCREATE INDEX t1_thing_id_idx\n ON t1\n USING btree\n (thing_id);\n\n\nother columns from this real table are discarted - and not important, what\nis important is that in the moment I want to run the query... I know\ncalc_id (pk of this table - but don't know thing_id)...\n\nto simplify test I filled t1 with 100 rows with same values in calc_id and\nthing_id...\n\nSecond table are transactions about things:\n\nCREATE TABLE t2\n(\n trans_id serial NOT NULL,\n thing_id integer,\n no_index integer,\n CONSTRAINT t2_pk PRIMARY KEY (trans_id)\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE t2\n OWNER TO postgres;\n\n-- Index: t5_c2_idx\n\n-- DROP INDEX t5_c2_idx;\n\nCREATE INDEX t5_c2_idx\n ON t2\n USING btree\n (thing_id);\n\n\nthis table I have filled with 1m rows with rundom number in thing_id\nbetween 1 and 100\n\nwhen we enter transaction about thing to t2, in some moment we could have\nadditional info about the thing, in some moment not... so if we have\nadditional info in the same time row is inserted in t2 and t3 with the same\ntrans_id...\n\nCREATE TABLE t3\n(\n trans_id integer NOT NULL,\n c2_text text,\n CONSTRAINT t3_pk PRIMARY KEY (trans_id)\n)\nWITH (\n OIDS=FALSE\n);\n\nno additional indexes on t3...\n\nnow we have made a view:\n\nCREATE OR REPLACE VIEW t2_left_t3_volatile AS\n SELECT t2.trans_id, t2.thing_id, t2.no_index, t3.c2_text, random() AS\nrandom\n FROM t2\n LEFT JOIN t3 USING (trans_id);\n\nAnd here we go:\n\nwe want see all transactions about the thing_id\n\nEXPLAIN ANALYZE\nSELECT * FROM t2_left_t3_volatile\nWHERE thing_id = 20\n\neverything is fine:\n\n\"Hash Left Join (cost=452.46..13067.16 rows=12474 width=45) (actual\ntime=6.537..62.633 rows=12038 loops=1)\"\n\" Hash Cond: (t2.trans_id = t3.trans_id)\"\n\" -> Bitmap Heap Scan on t2 (cost=448.30..12985.03 rows=12474 width=12)\n(actual time=6.418..57.498 rows=12038 loops=1)\"\n\" Recheck Cond: (thing_id = 20)\"\n\" -> Bitmap Index Scan on t5_c2_idx (cost=0.00..445.18 rows=12474\nwidth=0) (actual time=4.429..4.429 rows=12038 loops=1)\"\n\" Index Cond: (thing_id = 20)\"\n\" -> Hash (cost=2.96..2.96 rows=96 width=37) (actual time=0.086..0.086\nrows=96 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 7kB\"\n\" -> Seq Scan on t3 (cost=0.00..2.96 rows=96 width=37) (actual\ntime=0.016..0.045 rows=96 loops=1)\"\n\"Total runtime: 63.217 ms\"\n\nbut problem is - we don't know the thing id - we know calc_id:\n\nEXPLAIN ANALYZE\nSELECT * FROM t2_left_t3_volatile v INNER JOIN t1 USING (thing_id)\nWHERE calc_id = 20\n\nand planner picks:\n\n\"Hash Join (cost=6.42..48367.52 rows=12111 width=4) (actual\ntime=0.261..471.042 rows=12038 loops=1)\"\n\" Hash Cond: (t2.thing_id = t1.thing_id)\"\n\" -> Hash Left Join (cost=4.16..31591.51 rows=1211101 width=45) (actual\ntime=0.161..394.076 rows=1211101 loops=1)\"\n\" Hash Cond: (t2.trans_id = t3.trans_id)\"\n\" -> Seq Scan on t2 (cost=0.00..24017.01 rows=1211101 width=12)\n(actual time=0.075..140.937 rows=1211101 loops=1)\"\n\" -> Hash (cost=2.96..2.96 rows=96 width=37) 
(actual\ntime=0.069..0.069 rows=96 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 7kB\"\n\" -> Seq Scan on t3 (cost=0.00..2.96 rows=96 width=37)\n(actual time=0.008..0.035 rows=96 loops=1)\"\n\" -> Hash (cost=2.25..2.25 rows=1 width=4) (actual time=0.035..0.035\nrows=1 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 1kB\"\n\" -> Seq Scan on t1 (cost=0.00..2.25 rows=1 width=4) (actual\ntime=0.017..0.030 rows=1 loops=1)\"\n\" Filter: (calc_id = 20)\"\n\" Rows Removed by Filter: 99\"\n\"Total runtime: 471.505 ms\"\n\nSeq scan on all tables...\n\nFirst thought was - maybe because of volatile function...\n\nbut on:\nSELECT v.no_index FROM t2_left_t3_volatile v INNER JOIN t1 USING (thing_id)\nWHERE calc_id = 20\n\nplanner picks the same scenario... even function column is not in the\nquery...\n\nhowever, situation is fine, if we have a view without the volatile function:\n\nCREATE OR REPLACE VIEW t2_left_t3 AS\n SELECT t2.trans_id, t2.thing_id, t2.no_index, t3.c2_text\n FROM t2\n LEFT JOIN t3 USING (trans_id);\n\nEXPLAIN ANALYZE\nSELECT v.no_index FROM t2_left_t3 v INNER JOIN t1 USING (thing_id)\nWHERE calc_id = 20\n\n\n\"Nested Loop (cost=437.49..13047.74 rows=12111 width=4) (actual\ntime=6.360..71.818 rows=12038 loops=1)\"\n\" -> Seq Scan on t1 (cost=0.00..2.25 rows=1 width=4) (actual\ntime=0.016..0.024 rows=1 loops=1)\"\n\" Filter: (calc_id = 20)\"\n\" Rows Removed by Filter: 99\"\n\" -> Bitmap Heap Scan on t2 (cost=437.49..12924.38 rows=12111 width=12)\n(actual time=6.330..69.063 rows=12038 loops=1)\"\n\" Recheck Cond: (thing_id = t1.thing_id)\"\n\" -> Bitmap Index Scan on t5_c2_idx (cost=0.00..434.46 rows=12111\nwidth=0) (actual time=4.372..4.372 rows=12038 loops=1)\"\n\" Index Cond: (thing_id = t1.thing_id)\"\n\"Total runtime: 72.461 ms\"\n\nAny idea why planner picks bad plan if there is VOLATILE function?\n\nthere are no difference in result between:\n\nSELECT v.no_index, random FROM t2_left_t3_volatile v INNER JOIN t1 USING\n(thing_id)\nWHERE calc_id = 20\n\nAnd\n\nSELECT v.no_index, random() FROM t2_left_t3 v INNER JOIN t1 USING (thing_id)\nWHERE calc_id = 20\n\nbut huge difference in plan...\n\nAnd logically there is no diff to (our solution)\n\n\nEXPLAIN ANALYZE\nSELECT * FROM t2_left_t3_volatile\nWHERE thing_id = (SELECT thing_id FROM t1 WHERE calc_id = 20)\n\n\nthough real scenario is a lot more complex... i.e. t1 has start_date and\nend_date...\nt3 has date colummn as well\n\nso on simple question:\n\nSELECT * FROM t2_left_t3_volatile v INNER JOIN t1 USING (thing_id)\nWHERE calc_id = 20 AND v.date BETWEEN t1.start_date AND t2.end_date\n\nWe would need to write 3 subqueries on the same table to dont use join...\n\nbut to dont use 3 times subquery... we use CTE\n\nWITH calc AS\n(\nSELECT thing_id FROM t1 WHERE calc_id = 20\n)\nSELECT * FROM t2_left_t3_volatile v\nWHERE v.thing_id=calc.thing_id AND v.date BETWEEN calc.start_date AND\ncalc.end_date\n\nAnd result is acceptable...\n\nBut solution is not good enough - it means, whenever we meet problem with\nperfomance (in production - unfortunatelly) - we will need to spend time to\nredefine - simple queries! 
:(\n\nNow I am not sure - is this for perform or hackers list...\n\nAny suggestion what we can do to improve things?\n\nOr Any insights that things with planner inside Postgres will be improved\nin \"reasonable time\" - whatever it means :) :)\n\nThanks in advance,\n\nMisa\n\nHI,I have a wierd problem with PostgreSQL planner...Problem showed up in Production on PG9.1 (Ubuntu)But I have succeeded to get the same behavior on my PG 9.2 on Windows...\nit is about 3 tables & onad one view - but view have volatile function:CREATE TABLE t1( calc_id serial NOT NULL,\n thing_id integer, CONSTRAINT t1_pk PRIMARY KEY (calc_id))WITH ( OIDS=FALSE);ALTER TABLE t1 OWNER TO postgres;\n-- Index: t1_thing_id_idx-- DROP INDEX t1_thing_id_idx;CREATE INDEX t1_thing_id_idx ON t1 USING btree (thing_id);\nother columns from this real table are discarted - and not important, what is important is that in the moment I want to run the query... I know calc_id (pk of this table - but don't know thing_id)...\nto simplify test I filled t1 with 100 rows with same values in calc_id and thing_id...Second table are transactions about things:\nCREATE TABLE t2( trans_id serial NOT NULL, thing_id integer, no_index integer, CONSTRAINT t2_pk PRIMARY KEY (trans_id))WITH (\n OIDS=FALSE);ALTER TABLE t2 OWNER TO postgres;-- Index: t5_c2_idx-- DROP INDEX t5_c2_idx;CREATE INDEX t5_c2_idx\n ON t2 USING btree (thing_id);this table I have filled with 1m rows with rundom number in thing_id between 1 and 100\nwhen we enter transaction about thing to t2, in some moment we could have additional info about the thing, in some moment not... so if we have additional info in the same time row is inserted in t2 and t3 with the same trans_id...\nCREATE TABLE t3( trans_id integer NOT NULL, c2_text text, CONSTRAINT t3_pk PRIMARY KEY (trans_id))WITH (\n OIDS=FALSE);no additional indexes on t3...now we have made a view:CREATE OR REPLACE VIEW t2_left_t3_volatile AS \n SELECT t2.trans_id, t2.thing_id, t2.no_index, t3.c2_text, random() AS random FROM t2 LEFT JOIN t3 USING (trans_id);And here we go:\nwe want see all transactions about the thing_idEXPLAIN ANALYZESELECT * FROM t2_left_t3_volatileWHERE thing_id = 20\neverything is fine:\"Hash Left Join (cost=452.46..13067.16 rows=12474 width=45) (actual time=6.537..62.633 rows=12038 loops=1)\"\" Hash Cond: (t2.trans_id = t3.trans_id)\"\n\" -> Bitmap Heap Scan on t2 (cost=448.30..12985.03 rows=12474 width=12) (actual time=6.418..57.498 rows=12038 loops=1)\"\" Recheck Cond: (thing_id = 20)\"\" -> Bitmap Index Scan on t5_c2_idx (cost=0.00..445.18 rows=12474 width=0) (actual time=4.429..4.429 rows=12038 loops=1)\"\n\" Index Cond: (thing_id = 20)\"\" -> Hash (cost=2.96..2.96 rows=96 width=37) (actual time=0.086..0.086 rows=96 loops=1)\"\" Buckets: 1024 Batches: 1 Memory Usage: 7kB\"\n\" -> Seq Scan on t3 (cost=0.00..2.96 rows=96 width=37) (actual time=0.016..0.045 rows=96 loops=1)\"\"Total runtime: 63.217 ms\"but problem is - we don't know the thing id - we know calc_id:\nEXPLAIN ANALYZESELECT * FROM t2_left_t3_volatile v INNER JOIN t1 USING (thing_id)WHERE calc_id = 20and planner picks:\n\"Hash Join (cost=6.42..48367.52 rows=12111 width=4) (actual time=0.261..471.042 rows=12038 loops=1)\"\" Hash Cond: (t2.thing_id = t1.thing_id)\"\" -> Hash Left Join (cost=4.16..31591.51 rows=1211101 width=45) (actual time=0.161..394.076 rows=1211101 loops=1)\"\n\" Hash Cond: (t2.trans_id = t3.trans_id)\"\" -> Seq Scan on t2 (cost=0.00..24017.01 rows=1211101 width=12) (actual time=0.075..140.937 rows=1211101 loops=1)\"\n\" -> Hash 
(cost=2.96..2.96 rows=96 width=37) (actual time=0.069..0.069 rows=96 loops=1)\"\" Buckets: 1024 Batches: 1 Memory Usage: 7kB\"\" -> Seq Scan on t3 (cost=0.00..2.96 rows=96 width=37) (actual time=0.008..0.035 rows=96 loops=1)\"\n\" -> Hash (cost=2.25..2.25 rows=1 width=4) (actual time=0.035..0.035 rows=1 loops=1)\"\" Buckets: 1024 Batches: 1 Memory Usage: 1kB\"\" -> Seq Scan on t1 (cost=0.00..2.25 rows=1 width=4) (actual time=0.017..0.030 rows=1 loops=1)\"\n\" Filter: (calc_id = 20)\"\" Rows Removed by Filter: 99\"\"Total runtime: 471.505 ms\"Seq scan on all tables...\nFirst thought was - maybe because of volatile function...but on:SELECT v.no_index FROM t2_left_t3_volatile v INNER JOIN t1 USING (thing_id)\nWHERE calc_id = 20planner picks the same scenario... even function column is not in the query...however, situation is fine, if we have a view without the volatile function:\nCREATE OR REPLACE VIEW t2_left_t3 AS SELECT t2.trans_id, t2.thing_id, t2.no_index, t3.c2_text FROM t2 LEFT JOIN t3 USING (trans_id);\nEXPLAIN ANALYZESELECT v.no_index FROM t2_left_t3 v INNER JOIN t1 USING (thing_id)WHERE calc_id = 20\"Nested Loop (cost=437.49..13047.74 rows=12111 width=4) (actual time=6.360..71.818 rows=12038 loops=1)\"\n\" -> Seq Scan on t1 (cost=0.00..2.25 rows=1 width=4) (actual time=0.016..0.024 rows=1 loops=1)\"\" Filter: (calc_id = 20)\"\" Rows Removed by Filter: 99\"\n\" -> Bitmap Heap Scan on t2 (cost=437.49..12924.38 rows=12111 width=12) (actual time=6.330..69.063 rows=12038 loops=1)\"\" Recheck Cond: (thing_id = t1.thing_id)\"\" -> Bitmap Index Scan on t5_c2_idx (cost=0.00..434.46 rows=12111 width=0) (actual time=4.372..4.372 rows=12038 loops=1)\"\n\" Index Cond: (thing_id = t1.thing_id)\"\"Total runtime: 72.461 ms\"Any idea why planner picks bad plan if there is VOLATILE function?\nthere are no difference in result between:SELECT v.no_index, random FROM t2_left_t3_volatile v INNER JOIN t1 USING (thing_id)WHERE calc_id = 20\nAndSELECT v.no_index, random() FROM t2_left_t3 v INNER JOIN t1 USING (thing_id)WHERE calc_id = 20\nbut huge difference in plan...And logically there is no diff to (our solution)EXPLAIN ANALYZESELECT * FROM t2_left_t3_volatile\nWHERE thing_id = (SELECT thing_id FROM t1 WHERE calc_id = 20)though real scenario is a lot more complex... i.e. t1 has start_date and end_date...t3 has date colummn as well\nso on simple question:SELECT * FROM t2_left_t3_volatile v INNER JOIN t1 USING (thing_id)WHERE calc_id = 20 AND v.date BETWEEN t1.start_date AND t2.end_date\nWe would need to write 3 subqueries on the same table to dont use join...but to dont use 3 times subquery... we use CTE\nWITH calc AS(SELECT thing_id FROM t1 WHERE calc_id = 20)SELECT * FROM t2_left_t3_volatile vWHERE v.thing_id=calc.thing_id AND v.date BETWEEN calc.start_date AND calc.end_date\nAnd result is acceptable...But solution is not good enough - it means, whenever we meet problem with perfomance (in production - unfortunatelly) - we will need to spend time to redefine - simple queries! :(\nNow I am not sure - is this for perform or hackers list...Any suggestion what we can do to improve things?\nOr Any insights that things with planner inside Postgres will be improved in \"reasonable time\" - whatever it means :) :)Thanks in advance,\nMisa",
"msg_date": "Sun, 24 Mar 2013 01:12:12 +0100",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL planner"
},
{
"msg_contents": "On Sat, Mar 23, 2013 at 8:12 PM, Misa Simic <[email protected]> wrote:\n> but problem is - we don't know the thing id - we know calc_id:\n>\n> EXPLAIN ANALYZE\n> SELECT * FROM t2_left_t3_volatile v INNER JOIN t1 USING (thing_id)\n> WHERE calc_id = 20\n\nWith this query you've got to scan all three tables. The calc_id qual\ncan only be pushed down into the scan on t1, so you need the whole\nt2/t3 join product.\n\n> EXPLAIN ANALYZE\n> SELECT v.no_index FROM t2_left_t3 v INNER JOIN t1 USING (thing_id)\n> WHERE calc_id = 20\n\nWith this query you only need to scan 2 tables. The join between t2\nand t3 is eliminated by the join removal code in favor of scanning\nonly t2, as shown in the plan you included:\n\n> \"Nested Loop (cost=437.49..13047.74 rows=12111 width=4) (actual\n> time=6.360..71.818 rows=12038 loops=1)\"\n> \" -> Seq Scan on t1 (cost=0.00..2.25 rows=1 width=4) (actual\n> time=0.016..0.024 rows=1 loops=1)\"\n> \" Filter: (calc_id = 20)\"\n> \" Rows Removed by Filter: 99\"\n> \" -> Bitmap Heap Scan on t2 (cost=437.49..12924.38 rows=12111 width=12)\n> (actual time=6.330..69.063 rows=12038 loops=1)\"\n> \" Recheck Cond: (thing_id = t1.thing_id)\"\n> \" -> Bitmap Index Scan on t5_c2_idx (cost=0.00..434.46 rows=12111\n> width=0) (actual time=4.372..4.372 rows=12038 loops=1)\"\n> \" Index Cond: (thing_id = t1.thing_id)\"\n> \"Total runtime: 72.461 ms\"\n\nThe difference is that this query has only one column in its target list, not *.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 10 May 2013 15:57:15 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL planner"
},
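[Editor's note] A sketch that isolates the join-removal behaviour Robert describes, using the t2/t3 definitions from the first message in this thread; the LEFT JOIN to t3 is only dropped when no t3 column is referenced and the join key is t3's primary key:

-- t3 is not referenced in the target list, so the planner removes the left join:
EXPLAIN SELECT t2.no_index FROM t2 LEFT JOIN t3 USING (trans_id);

-- referencing any t3 column forces the join to stay:
EXPLAIN SELECT t2.no_index, t3.c2_text FROM t2 LEFT JOIN t3 USING (trans_id);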
{
"msg_contents": "On Friday, May 10, 2013, Robert Haas wrote:\n\n> On Sat, Mar 23, 2013 at 8:12 PM, Misa Simic <[email protected]<javascript:;>>\n> wrote:\n> > but problem is - we don't know the thing id - we know calc_id:\n> >\n> > EXPLAIN ANALYZE\n> > SELECT * FROM t2_left_t3_volatile v INNER JOIN t1 USING (thing_id)\n> > WHERE calc_id = 20\n>\n> With this query you've got to scan all three tables. The calc_id qual\n> can only be pushed down into the scan on t1, so you need the whole\n> t2/t3 join product.\n>\n> > EXPLAIN ANALYZE\n> > SELECT v.no_index FROM t2_left_t3 v INNER JOIN t1 USING (thing_id)\n> > WHERE calc_id = 20\n>\n> With this query you only need to scan 2 tables. The join between t2\n> and t3 is eliminated by the join removal code in favor of scanning\n> only t2, as shown in the plan you included:\n>\n> > \"Nested Loop (cost=437.49..13047.74 rows=12111 width=4) (actual\n> > time=6.360..71.818 rows=12038 loops=1)\"\n> > \" -> Seq Scan on t1 (cost=0.00..2.25 rows=1 width=4) (actual\n> > time=0.016..0.024 rows=1 loops=1)\"\n> > \" Filter: (calc_id = 20)\"\n> > \" Rows Removed by Filter: 99\"\n> > \" -> Bitmap Heap Scan on t2 (cost=437.49..12924.38 rows=12111\n> width=12)\n> > (actual time=6.330..69.063 rows=12038 loops=1)\"\n> > \" Recheck Cond: (thing_id = t1.thing_id)\"\n> > \" -> Bitmap Index Scan on t5_c2_idx (cost=0.00..434.46\n> rows=12111\n> > width=0) (actual time=4.372..4.372 rows=12038 loops=1)\"\n> > \" Index Cond: (thing_id = t1.thing_id)\"\n> > \"Total runtime: 72.461 ms\"\n>\n> The difference is that this query has only one column in its target list,\n> not *.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\nThanks Robert,\n\nThat is a bit \"old\" problem to us...\n\nSolution for that kind of problems is: \"rephrase\" the question. So we have\nadded one more layer to transform input query to \"better\" query for\npostgres - very wierd...\n\nHowever, there are no differences... Planer use the same bad plan for:\n\n SELECT v.no_index FROM t2_left_t3_volatile v INNER JOIN t1 USING (thing_id)\nWHERE calc_id = 20\n\nIn that is one column as well...\n\nWe basicaly above query transform to:\n\nSELECT v.no_index FROM t2_left_t3_volatile v\nWHERE v.thing_id = (\nSELECT thing_id FROM t1\nWHERE calc_id = 20\n)\n\nWhat give us good result... Very wierd....\n\nThanks,\nMisa\n\nOn Friday, May 10, 2013, Robert Haas wrote:On Sat, Mar 23, 2013 at 8:12 PM, Misa Simic <[email protected]> wrote:\n\n> but problem is - we don't know the thing id - we know calc_id:\n>\n> EXPLAIN ANALYZE\n> SELECT * FROM t2_left_t3_volatile v INNER JOIN t1 USING (thing_id)\n> WHERE calc_id = 20\n\nWith this query you've got to scan all three tables. The calc_id qual\ncan only be pushed down into the scan on t1, so you need the whole\nt2/t3 join product.\n\n> EXPLAIN ANALYZE\n> SELECT v.no_index FROM t2_left_t3 v INNER JOIN t1 USING (thing_id)\n> WHERE calc_id = 20\n\nWith this query you only need to scan 2 tables. 
The join between t2\nand t3 is eliminated by the join removal code in favor of scanning\nonly t2, as shown in the plan you included:\n\n> \"Nested Loop (cost=437.49..13047.74 rows=12111 width=4) (actual\n> time=6.360..71.818 rows=12038 loops=1)\"\n> \" -> Seq Scan on t1 (cost=0.00..2.25 rows=1 width=4) (actual\n> time=0.016..0.024 rows=1 loops=1)\"\n> \" Filter: (calc_id = 20)\"\n> \" Rows Removed by Filter: 99\"\n> \" -> Bitmap Heap Scan on t2 (cost=437.49..12924.38 rows=12111 width=12)\n> (actual time=6.330..69.063 rows=12038 loops=1)\"\n> \" Recheck Cond: (thing_id = t1.thing_id)\"\n> \" -> Bitmap Index Scan on t5_c2_idx (cost=0.00..434.46 rows=12111\n> width=0) (actual time=4.372..4.372 rows=12038 loops=1)\"\n> \" Index Cond: (thing_id = t1.thing_id)\"\n> \"Total runtime: 72.461 ms\"\n\nThe difference is that this query has only one column in its target list, not *.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\nThanks Robert,That is a bit \"old\" problem to us...Solution for that kind of problems is: \"rephrase\" the question. So we have added one more layer to transform input query to \"better\" query for postgres - very wierd...\nHowever, there are no differences... Planer use the same bad plan for: SELECT v.no_index FROM t2_left_t3_volatile v INNER JOIN t1 USING (thing_id)\nWHERE calc_id = 20In that is one column as well...We basicaly above query transform to:\nSELECT v.no_index FROM t2_left_t3_volatile v WHERE v.thing_id = ( \nSELECT thing_id FROM t1 WHERE calc_id = 20\n)What give us good result... Very wierd....\nThanks,Misa",
"msg_date": "Sat, 11 May 2013 01:12:37 +0200",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL planner"
}
] |
[
{
"msg_contents": "Hi,\n\nI recently upgraded PostgreSQL from 9.0.12 to 9.2.3 on a test server to compare performance. I'm using pgbench to measure which results in around a 60% reduction. \n\nThe non-default configuration remains identical between versions except archive_command (different location) and custom_variable_classes (no longer supported) and are detailed are below. Is there some updated default configuration that I'm missing? Perhaps it's because of the new cascading replication feature? I've tried tweaking the memory settings to no avail.\n\nThe Linux server is on a cloud and has 4GB RAM and 2 CPUs and the same server is running both master and slave (these are separate in production). If you'd like any more details please ask. Here are the pgbench results:\n\n\nPostgreSQL 9.0.12\n-----------------\n/usr/pgsql-9.0/bin/pgbench -c 4 -t 20000 pgbench\n\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 1\nnumber of transactions per client: 20000\nnumber of transactions actually processed: 80000/80000\ntps = 140.784635 (including connections establishing)\ntps = 140.789389 (excluding connections establishing)\n\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 1\nnumber of transactions per client: 20000\nnumber of transactions actually processed: 80000/80000\ntps = 142.027320 (including connections establishing)\ntps = 142.032815 (excluding connections establishing)\n\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 1\nnumber of transactions per client: 20000\nnumber of transactions actually processed: 80000/80000\ntps = 150.745875 (including connections establishing)\ntps = 150.750959 (excluding connections establishing)\n\n\nPostgreSQL 9.2.3\n-----------------\n/usr/pgsql-9.2/bin/pgbench -c 4 -t 20000 pgbench\n\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 1\nnumber of transactions per client: 20000\nnumber of transactions actually processed: 80000/80000\ntps = 60.273767 (including connections establishing)\ntps = 60.274429 (excluding connections establishing)\n\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 1\nnumber of transactions per client: 20000\nnumber of transactions actually processed: 80000/80000\ntps = 57.634077 (including connections establishing)\ntps = 57.634847 (excluding connections establishing)\n\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 1\nnumber of transactions per client: 20000\nnumber of transactions actually processed: 80000/80000\ntps = 60.516492 (including connections establishing)\ntps = 60.517250 (excluding connections establishing)\n\n\nThe non-default configuration items\n-----------------\nlisten_addresses = '*'\t\t # what IP address(es) to listen on;\nmax_connections = 100\t\t\t# (change requires restart)\nshared_buffers = 256MB\t\t\t# min 128kB\ntemp_buffers = 128MB\t\t\t# min 800kB\nwork_mem = 32MB\t\t\t\t# min 64kB\nmaintenance_work_mem = 128MB\t\t# min 1MB\nmax_stack_depth = 8MB\t\t\t# min 100kB\nwal_level = hot_standby\t\t\t# minimal, archive, or hot_standby\ncheckpoint_segments = 3\t\t# in logfile 
segments, min 1, 16MB each\ncheckpoint_completion_target = 0.9\t# checkpoint target duration, 0.0 - 1.0\narchive_mode = on\t\t# allows archiving to be done\narchive_command = 'cp -f %p /var/lib/pgsql/9.0/archive/%f </dev/null'\t\t# command to use to archive a logfile segment\nmax_wal_senders = 10\t\t# max number of walsender processes\neffective_cache_size = 1GB\ndefault_statistics_target = 2000\t# range 1-10000\nlog_destination = 'stderr'\t\t# Valid values are combinations of\nlogging_collector = on\t\t\t# Enable capturing of stderr and csvlog\nlog_directory = 'pg_log'\t\t# directory where log files are written,\nlog_filename = 'postgresql-%a.log'\t# log file name pattern,\nlog_truncate_on_rotation = on\t\t# If on, an existing log file of the\nlog_rotation_age = 1d\t\t\t# Automatic rotation of logfiles will\nlog_rotation_size = 0\t\t\t# Automatic rotation of logfiles will\nlog_min_duration_statement = 5000\t# -1 is disabled, 0 logs all statements\nlog_checkpoints = on\nlog_line_prefix = '%t:%r:%u@%d:[%p]: '\t# special values:\nlog_statement = 'none'\t\t\t# none, ddl, mod, all\nautovacuum = on # changed by pgb_test for pgbench testing\nlog_autovacuum_min_duration = 100\t# -1 disables, 0 logs all actions and\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8'\t\t\t# locale for system error message\nlc_monetary = 'en_US.UTF-8'\t\t\t# locale for monetary formatting\nlc_numeric = 'en_US.UTF-8'\t\t\t# locale for number formatting\nlc_time = 'en_US.UTF-8'\t\t\t\t# locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\nshared_preload_libraries = 'auto_explain'\ncustom_variable_classes = 'auto_explain'\nauto_explain.log_min_duration = '15s'\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 25 Mar 2013 09:57:34 +0000",
"msg_from": "Colin Currie <[email protected]>",
"msg_from_op": true,
"msg_subject": "9.2.3 upgrade reduced pgbench performance by 60%"
},
{
"msg_contents": "\n> The Linux server is on a cloud and has 4GB RAM and 2 CPUs and the\n> same server is running both master and slave (these are separate in\n> production). If you'd like any more details please ask. Here are the\n> pgbench results:\n\nPresumably you created a new cloud server for 9.2, yes? I'd guess that\nthe difference is between the two cloud servers. Try testing, for\nexample, bonnie++ on the two servers.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 25 Mar 2013 13:34:24 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.2.3 upgrade reduced pgbench performance by 60%"
},
{
"msg_contents": "> -----Original Message-----\r\n> From: [email protected] [mailto:pgsql-\r\n> [email protected]] On Behalf Of Josh Berkus\r\n> Sent: Monday, March 25, 2013 4:34 PM\r\n> To: [email protected]\r\n> Subject: Re: [PERFORM] 9.2.3 upgrade reduced pgbench performance by\r\n> 60%\r\n> \r\n> \r\n> > The Linux server is on a cloud and has 4GB RAM and 2 CPUs and the same\r\n> > server is running both master and slave (these are separate in\r\n> > production). If you'd like any more details please ask. Here are the\r\n> > pgbench results:\r\n> \r\n> Presumably you created a new cloud server for 9.2, yes? I'd guess that the\r\n> difference is between the two cloud servers. Try testing, for example,\r\n> bonnie++ on the two servers.\r\n> \r\n\r\nI saw some similar results comparing 9.0 and 9.2 pgbench tests. My tests were on a VM, but on a dedicate host/hardware with no other VM's running on it to minimize variables. I didn't have a lot of time to dig into it, but I do recall seeing more lock contention on updates on the 9.2 instance though.\r\n\r\n\r\nBrad. \r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 26 Mar 2013 20:05:39 +0000",
"msg_from": "\"Nicholson, Brad (Toronto, ON, CA)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.2.3 upgrade reduced pgbench performance by 60%"
},
{
"msg_contents": "\n> \n> I saw some similar results comparing 9.0 and 9.2 pgbench tests. My tests were on a VM, but on a dedicate host/hardware with no other VM's running on it to minimize variables. I didn't have a lot of time to dig into it, but I do recall seeing more lock contention on updates on the 9.2 instance though.\n\nAh, good point. At that scale, Colin's test is mostly a contention\ntest. There's something there worth investigating, but it's not a\nrealistic use case.\n\nColin, please try your pgbench tests with -s 100.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 27 Mar 2013 15:52:10 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.2.3 upgrade reduced pgbench performance by 60%"
},
{
"msg_contents": "On Monday, March 25, 2013, Colin Currie wrote:\n\n> Hi,\n>\n> I recently upgraded PostgreSQL from 9.0.12 to 9.2.3 on a test server to\n> compare performance. I'm using pgbench to measure which results in around a\n> 60% reduction.\n>\n> The non-default configuration remains identical between versions except\n> archive_command (different location) and custom_variable_classes (no longer\n> supported) and are detailed are below. Is there some updated default\n> configuration that I'm missing? Perhaps it's because of the new cascading\n> replication feature? I've tried tweaking the memory settings to no avail.\n>\n> The Linux server is on a cloud and has 4GB RAM and 2 CPUs and the same\n> server is running both master and slave (these are separate in production).\n\n\nWhat does your recovery.conf look like? What if you don't run the slave at\nall, then how do they compare?\n\n Cheers,\n\nJeff\n\nOn Monday, March 25, 2013, Colin Currie wrote:Hi,\n\nI recently upgraded PostgreSQL from 9.0.12 to 9.2.3 on a test server to compare performance. I'm using pgbench to measure which results in around a 60% reduction.\n\nThe non-default configuration remains identical between versions except archive_command (different location) and custom_variable_classes (no longer supported) and are detailed are below. Is there some updated default configuration that I'm missing? Perhaps it's because of the new cascading replication feature? I've tried tweaking the memory settings to no avail.\n\nThe Linux server is on a cloud and has 4GB RAM and 2 CPUs and the same server is running both master and slave (these are separate in production). What does your recovery.conf look like? What if you don't run the slave at all, then how do they compare?\n Cheers,Jeff",
"msg_date": "Sun, 31 Mar 2013 10:16:50 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.2.3 upgrade reduced pgbench performance by 60%"
}
] |
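A note on re-running the comparison at the larger scale factor suggested above: at scale factor 1 the pgbench_branches table holds a single row, so four clients mostly queue up behind row-level locks instead of exercising the I/O and planner paths that changed between releases. A sketch of the re-test follows; the binary path matches the one used earlier in the thread, while the -j and -T values are only illustrative choices, not taken from the original posts:

    /usr/pgsql-9.2/bin/pgbench -i -s 100 pgbench          # rebuild the pgbench tables at scale factor 100
    /usr/pgsql-9.2/bin/pgbench -c 4 -j 2 -T 300 pgbench   # 4 clients, 2 worker threads, fixed 5-minute run

Running for a fixed time (-T) rather than a fixed transaction count also makes two servers easier to compare when their throughput differs this much.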
[
{
"msg_contents": "Hi all,\n\nI'm looking at this query plan, an excerpt of which is shown here, and I am\njust wondering how the estimated cost for the Nested Loop is calculated?\n\n-> Nested Loop (*cost=0.00..2888.16* rows=240 width=16) (actual\ntime=0.034..2.180 rows=91 loops=1)\n Output: public.mg.lctime, public.mg.gfid\n -> Index Scan using _\nba6cf7271af37e26c0e09e3225369f1b on version (*cost=0.00..958.13* rows=240\nwidth=8) (actual time=0.013..0.318 rows=91 loops=1)\n Output: version.id, version.gfid, version.tid, version.seq,\nversion.txsid, version.objectid\n -> Index Scan using mgfid__uniq on mg (*cost=0.00..8.03* rows=1\nwidth=16) (actual time=0.005..0.008 rows=1 loops=91)\n Output: public.mg.id, public.mg.gfid, public.mg.ftype,\npublic.mg.lctime\n Index Cond: (public.mg.gfid = version.gfid)\n Filter: (public.mg.lctime < 1363076849)\n\nWhat I expected is that it would be the sum of the output cost of the two\nindex scans?\nI have no clue how it came up with 2,888.\n\nThank you.\n\nHi all,I'm looking at this query plan, an excerpt of which \nis shown here, and I am just wondering how the estimated cost for the \nNested Loop is calculated?-> Nested Loop (cost=0.00..2888.16 rows=240 width=16) (actual time=0.034..2.180 rows=91 loops=1)\n Output: public.mg.lctime, public.mg.gfid -> Index Scan using _ba6cf7271af37e26c0e09e3225369f1b on version (cost=0.00..958.13 rows=240 width=8) (actual time=0.013..0.318 rows=91 loops=1)\n Output: version.id, version.gfid, version.tid, version.seq, version.txsid, version.objectid\n -> Index Scan using mgfid__uniq on mg (cost=0.00..8.03 rows=1 width=16) (actual time=0.005..0.008 rows=1 loops=91) Output: public.mg.id, public.mg.gfid, public.mg.ftype, public.mg.lctime\n\n Index Cond: (public.mg.gfid = version.gfid) Filter: (public.mg.lctime < 1363076849)What I expected is that it would be the sum of the output cost of the two index scans?I have no clue how it came up with 2,888.\nThank you.",
"msg_date": "Mon, 25 Mar 2013 14:16:51 -0400",
"msg_from": "pg noob <[email protected]>",
"msg_from_op": true,
"msg_subject": "query plan estimate"
},
{
"msg_contents": "pg noob <[email protected]> writes:\n> I'm looking at this query plan, an excerpt of which is shown here, and I am\n> just wondering how the estimated cost for the Nested Loop is calculated?\n\nhttp://www.postgresql.org/docs/9.2/static/using-explain.html\n\nThere's a nestloop example about a third of the way down the page ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 31 Mar 2013 20:52:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query plan estimate"
}
] |
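For readers following along with the numbers in the plan above, the nested-loop estimate works out, to a first approximation, as the outer scan's cost plus the estimated outer row count times the per-iteration cost of the inner scan: roughly 958.13 + 240 * 8.03 = 2885.3, which is within a few cost units of the reported 2888.16; the small remainder is per-tuple CPU cost charged for the expected join output. It is not simply the sum of the two scan costs because the inner index scan is expected to be repeated once per outer row (240 times in the estimate, 91 times in the actual run).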
[
{
"msg_contents": "(following the interest from -hackers, I'm posting this here). \n\nHi folks, \n\nI've always been fascinated with genetic algorithms. Having had a chance to implement it once before, to solve real life issue - I knew they can be brilliant at searching for right solutions in multi dimensional space.\n\nThinking about just the postgresql.conf and number of possible options there to satisfy performance needs - I thought, this does sound like a good example of problem that can be solved using genetic algorithm. \n\nSo I sat down after work for few days, and came up with a simple proof of concept.\nIt generates random population of postgresql configuration files, and runs simple pgbench test on each one of them. It takes the average TPS for 3 consecutive runs as the score that then is applied to each 'guy'. \n\nThen I run a typical - I suppose - cross over operation and slight mutation of each new chromosome - to come up with new population, and so on and so forth. \n\nRunning this for 2 days - I came up to conclusion that it does indeed seem to work, although default pgbench 'test cases' are not really stressing the database enough for it to generate diverse enough populations each time. \n\nAlso, ideally this sort of thing should be run on two or more different hosts. One (master) that just generates new configurations, saves, restores, manages the whole operation - and 'slave' host(s) that run the actual tests.\n\nOne benefit of that would be the fact that genetic algorithms are highly parallelizable. \n\nI did reboot my machines after tests couple times, to test configuration files and to see if the results were in fact repeatable (as much as they can be) - and I have to say, to my surprise - they were. I.e. the configuration files with poor results were still obviously slower then the best ones.\n\nI did include my sample results for everyone to see, including nice spreadsheet with graphs (everyone loves graphs) showing the scores across all populations.\nThe tests were ran on my mac laptops (don’t have access to bunch of servers that I can test things like that on for couple days, sorry).\n\nThe project, including readme file is available to look at: https://github.com/waniek/genpostgresql.conf\n\n\nThings I know so far:\n* I probably need to take into account more configuration options;\n* pgbench with its default test case is not the right characterization suite for this exercise, I need something more life like. I suppose we all have some sort of a characterization suite that could be used here;\n* Code needs a lot work on it, if this was to be used professionally;\n* Just restarting postgresql with different configuration file doesn't really constitute fully proper way to test new configuration files, but it seem to work;\n\n\nI don't expect much out of this - after all this is just a proof of concept. But if there are people out there thinking this can be in any way useful - please give us a shout. \nAlso, if you know something more about genetic algorithms then I do - and can suggest improvement - let me know.\n\nLastly, I'm looking for some more sophisticated pgbench test cases that I could throw in at it. I think in general pgbench as a project could use some more sophisticated benchmarks that should be included with the project, for everyone to see. Perhaps even to run some automated regression tests against git head. 
\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 25 Mar 2013 20:08:39 +0000",
"msg_from": "Greg Jaskiewicz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proof of concept: Evolving postgresql.conf using genetic algorithm"
}
] |
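The fitness-evaluation step described in the post (swap in a candidate postgresql.conf, restart, run pgbench a few times, average the TPS) is easy to sketch independently of the linked project; the data directory, database name and run length below are assumptions for illustration, not values taken from the repository:

    #!/bin/sh
    # score_config.sh -- rough sketch: score one candidate postgresql.conf by average pgbench TPS
    CANDIDATE="$1"                      # path to a generated configuration file
    PGDATA=/var/lib/pgsql/9.2/data      # assumed data directory
    cp "$CANDIDATE" "$PGDATA/postgresql.conf"
    pg_ctl -D "$PGDATA" restart -w      # restart so the new settings take effect
    total=0
    for run in 1 2 3; do
        tps=$(pgbench -c 4 -T 60 pgbench | awk '/excluding connections/ {print $3}')
        total=$(echo "$total + $tps" | bc)
    done
    echo "scale=2; $total / 3" | bc     # average TPS = fitness score for this candidate

The crossover and mutation steps then operate on the option values inside the candidate files, keeping the highest-scoring candidates for the next generation.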
[
{
"msg_contents": "Folks,\n\nIn the past, setting vacuum_freeze_min_age (vfma) really low (say to\n10000 or 50000) would have caused lots of extra writing work due to\ndirtying extra pages for freezing. This has been our stated reason to\nkeep vfma high, despite the obvious advantage of freezing tuples while\nthey're still in the cache.\n\nWith the visibility map, though, vfma should only be dirtying pages\nwhich vacuum is already visiting because there's dirty tuples on the\npage. That is, pages which vacuum will probably dirty anyway, freezing\nor not. (This is assuming one has applied the 9.2.3 update.)\n\nGiven that, it seems like the cost of lowering vfma *should* be\nmarginal. The only extra work done by a lower vfma should be:\n\n1. extra cpu time to put in the froxenXIDs on vacuumed pages, and\n2. dirtying the minority of pages which vacuum decided to scan, but not\nwrite to.\n\nThe second point is the one where I'm not sure how to evaluate. How\nlikely, as of 9.2, is vacuum to visit a page and not dirty it? And are\nthere other costs I'm not thinking of?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 25 Mar 2013 13:31:17 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Setting vacuum_freeze_min_age really low"
},
{
"msg_contents": "On Mon, Mar 25, 2013 at 4:31 PM, Josh Berkus <[email protected]> wrote:\n> In the past, setting vacuum_freeze_min_age (vfma) really low (say to\n> 10000 or 50000) would have caused lots of extra writing work due to\n> dirtying extra pages for freezing. This has been our stated reason to\n> keep vfma high, despite the obvious advantage of freezing tuples while\n> they're still in the cache.\n\nThat, and Tom's concern about forensics, which I understand to be the\nlarger sticking point.\n\n> With the visibility map, though, vfma should only be dirtying pages\n> which vacuum is already visiting because there's dirty tuples on the\n> page. That is, pages which vacuum will probably dirty anyway, freezing\n> or not. (This is assuming one has applied the 9.2.3 update.)\n\nI think this is probably not accurate, although I'll defer to someone\nwith more real-world experience. I'd guess that it's uncommon for\nactively updated data and very-rarely-updated data to be mixed\ntogether on the same pages with any real regularity. IOW, the dirty\npages probably don't have anything on them that can be frozen anyway.\n\nSo, if the table's age is less than vacuum_freeze_table_age, we'll\nonly scan pages not already marked all-visible. Regardless of vfma,\nwe probably won't freeze much.\n\nOn the other hand, if the table's age is at least\nvacuum_freeze_table_age, we'll scan the whole table and freeze a lotta\nstuff all at once. Again, whether vfma is high or low won't matter\nmuch: it's definitely less than vacuum_freeze_table_age.\n\nBasically, I would guess that both the costs and the benefits of\nchanging this are pretty small. It would be nice to hear from someone\nwho has tried it, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 May 2013 12:09:04 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting vacuum_freeze_min_age really low"
},
{
"msg_contents": "Hi,\n\nOn 2013-03-25 13:31:17 -0700, Josh Berkus wrote:\n> In the past, setting vacuum_freeze_min_age (vfma) really low (say to\n> 10000 or 50000) would have caused lots of extra writing work due to\n> dirtying extra pages for freezing. This has been our stated reason to\n> keep vfma high, despite the obvious advantage of freezing tuples while\n> they're still in the cache.\n> \n> With the visibility map, though, vfma should only be dirtying pages\n> which vacuum is already visiting because there's dirty tuples on the\n> page. That is, pages which vacuum will probably dirty anyway, freezing\n> or not. (This is assuming one has applied the 9.2.3 update.)\n> \n> Given that, it seems like the cost of lowering vfma *should* be\n> marginal. The only extra work done by a lower vfma should be:\n> \n> 1. extra cpu time to put in the froxenXIDs on vacuumed pages, and\n> 2. dirtying the minority of pages which vacuum decided to scan, but not\n> write to.\n\nIt will also often enough lead to a page being frozen repeatedly which\ncauses unneccessary IO and WAL traffic. If a page contains pages from\nseveral transactions its not unlikely that some tuples are older and\nsome are newer than vfma. That scenario isn't unlikely because of two\nscenarios:\n- INSERT/UPDATE reusing space on older pages where tuples have been\n deleted.\n- When a backend extends a relation that page is *not* known to have\n free space to other relations. Until vacuum comes along for the first\n time only this backend will use its space. Given that busy clusters\n frequently burn loads of xids per second it is not uncommon to have a\n wide range of xids on such a page.\n\n> And are there other costs I'm not thinking of?\n\nI think (but am not 100% sure right now) it would have another rather\nbig cost:\nWhen a page contains freezable items, as determined by freeze_min_age,\nand we are doing a full table scan we won't skip buffers that we can't\nlock for cleanup. Instead we will wait and then lock them for\ncleanup. So I think this would be rather noticeably impact the speed of\nvacuum (since it waits more often) and concurrency (since we lock more\nbuffers than before, even if they are actively used).\n\nMakes sense?\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 May 2013 19:18:39 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting vacuum_freeze_min_age really low"
},
{
"msg_contents": "On 2013-05-09 12:09:04 -0400, Robert Haas wrote:\n> On Mon, Mar 25, 2013 at 4:31 PM, Josh Berkus <[email protected]> wrote:\n> > In the past, setting vacuum_freeze_min_age (vfma) really low (say to\n> > 10000 or 50000) would have caused lots of extra writing work due to\n> > dirtying extra pages for freezing. This has been our stated reason to\n> > keep vfma high, despite the obvious advantage of freezing tuples while\n> > they're still in the cache.\n> \n> That, and Tom's concern about forensics, which I understand to be the\n> larger sticking point.\n\nFWIW I found having sensible xmin/xmax repeatedly really useful for\ndebugging problems. Most problems don't get noticed within minutes so\nloosing evidence that fast is bad.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 9 May 2013 19:22:05 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting vacuum_freeze_min_age really low"
},
{
"msg_contents": "Robert, Andres,\n\n> That, and Tom's concern about forensics, which I understand to be the\n> larger sticking point.\n\nI don't buy the idea that we should cause regular recurring performance\nissues for all of our users in order to aid diagnosing the kind of\nissues which happen 1% of the time to 2% of our users.\n\n> So, if the table's age is less than vacuum_freeze_table_age, we'll\n> only scan pages not already marked all-visible. Regardless of vfma,\n> we probably won't freeze much.\n\nRight, but the pages which were dirtied *anyway* will get frozen.\n\n> On the other hand, if the table's age is at least\n> vacuum_freeze_table_age, we'll scan the whole table and freeze a lotta\n> stuff all at once. Again, whether vfma is high or low won't matter\n> much: it's definitely less than vacuum_freeze_table_age.\n\nRight.\n\n> Basically, I would guess that both the costs and the benefits of\n> changing this are pretty small. It would be nice to hear from someone\n> who has tried it, though.\n\nWell, I have, but I don't exactly have empirical testing results from\nit. That's really the sticking point here: can we measurably\ndemonstrate that lowering vfma makes autovacuum freeze happen less\noften, and do less work when it does? Realistically, I think that's\nwaiting on me having time to do some lengthy performance testing.\n\n> It will also often enough lead to a page being frozen repeatedly which\n> causes unneccessary IO and WAL traffic. If a page contains pages from\n> several transactions its not unlikely that some tuples are older and\n> some are newer than vfma. That scenario isn't unlikely because of two\n> scenarios:\n\nNobody has yet explained to me where this extra WAL and IO traffic would\ncome from. vfma only takes effect if the page is being vacuumed\n*anyway*. And if the page is being vacuumed anyway, the page is being\nrewritten anyway, and it doesn't matter how many changes we make on that\npage, except as far as CPU time is concerned. As far as IO is\nconcerned, an 8K page is an 8K page. No?\n\nThe only time I can imagine this resulting in extra IO is if vacuum is\nregularly visiting pages which don't have any other work to do, but do\nhave tuples which could be frozen if vfma was lowered. I would tend to\nthink that this would be a tiny minority of pages, but testing may be\nthe only way to answer that.\n\n> When a page contains freezable items, as determined by freeze_min_age,\n> and we are doing a full table scan we won't skip buffers that we can't\n> lock for cleanup. Instead we will wait and then lock them for\n> cleanup. So I think this would be rather noticeably impact the speed of\n> vacuum (since it waits more often) and concurrency (since we lock more\n> buffers than before, even if they are actively used).\n\nWell, that behavior sounds like something we should maybe fix,\nregardless of whether we're lowering the default vfma or not.\n\n--Josh Berkus\n\n\n\n\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 11 May 2013 16:28:32 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting vacuum_freeze_min_age really low"
},
{
"msg_contents": "Hi Josh,\n\nOn 2013-05-11 16:28:32 -0700, Josh Berkus wrote:\n> > That, and Tom's concern about forensics, which I understand to be the\n> > larger sticking point.\n> \n> I don't buy the idea that we should cause regular recurring performance\n> issues for all of our users in order to aid diagnosing the kind of\n> issues which happen 1% of the time to 2% of our users.\n\nWell. For one you haven't proven that the changed setting actually\nimproves performance. So the comparison isn't really valid. We will\nstill need full table vacuums to be able to change relfrozenxids. Also,\nhe small percentages are the cases where the shit really hit the\nfan. Making sure you have at least some chance of a) diagnosing the\nissue b) recovering data is a pretty good thing.\n\n> > So, if the table's age is less than vacuum_freeze_table_age, we'll\n> > only scan pages not already marked all-visible. Regardless of vfma,\n> > we probably won't freeze much.\n \n> Right, but the pages which were dirtied *anyway* will get frozen.\n\nI think you're missing the fact that we don't neccessarily dirty pages,\njust because vacuum visits them. In a mostly insert workload its not\nuncommon that vacuum doesn't change anything. In many scenarios the\nfirst time vacuum visits a page it cannot yet me marked \"all-visible\"\nyet so we will visit again soon after anyway. And after that there will\nbe regular full table vacuums.\n\n> > It will also often enough lead to a page being frozen repeatedly which\n> > causes unneccessary IO and WAL traffic. If a page contains pages from\n> > several transactions its not unlikely that some tuples are older and\n> > some are newer than vfma. That scenario isn't unlikely because of two\n> > scenarios:\n> \n> Nobody has yet explained to me where this extra WAL and IO traffic would\n> come from. vfma only takes effect if the page is being vacuumed\n> *anyway*.\n\nThere's multiple points here:\na) we don't necessarily write/dirty anything if vacuum doesn't find\n anything to do\nb) freezing tuples requires a xlog_heap_freeze wal record to be\n emitted. If we don't freeze, we don't need to emit it.\n\n> And if the page is being vacuumed anyway, the page is being\n> rewritten anyway, and it doesn't matter how many changes we make on that\n> page, except as far as CPU time is concerned. As far as IO is\n> concerned, an 8K page is an 8K page. No?\n\nSure, *if* we writeout the page, it doesn't matter at all whether we\nchanged one byte or all of them. Unless it also requires extra xlog\nrecords to be emitted.\n\n> The only time I can imagine this resulting in extra IO is if vacuum is\n> regularly visiting pages which don't have any other work to do, but do\n> have tuples which could be frozen if vfma was lowered. I would tend to\n> think that this would be a tiny minority of pages, but testing may be\n> the only way to answer that.\n\nINSERT only produces workloads like that.\n\n> > When a page contains freezable items, as determined by freeze_min_age,\n> > and we are doing a full table scan we won't skip buffers that we can't\n> > lock for cleanup. Instead we will wait and then lock them for\n> > cleanup. 
So I think this would be rather noticeably impact the speed of\n> > vacuum (since it waits more often) and concurrency (since we lock more\n> > buffers than before, even if they are actively used).\n> \n> Well, that behavior sounds like something we should maybe fix,\n> regardless of whether we're lowering the default vfma or not.\n\nWell, that's easier said than done ;)\n\nI wonder if we couldn't do the actual freezeing - not the dead tuple\ndeletion - without a cleanup but just with an exclusive lock?\n\nI think I have said that before, but anyway: I think as long as we need\nto regularly walk the whole relation for correctness there isn't much\nhope to get this into an acceptable state. If we would track the oldest\nxid in a page in a 'freeze map' we could make much of this more\nefficient and way more scalable to bigger data volumes.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 12 May 2013 14:50:27 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting vacuum_freeze_min_age really low"
},
{
"msg_contents": "On Sun, May 12, 2013 at 8:50 AM, Andres Freund <[email protected]> wrote:\n> [ a response that I entirely agree with ]\n\n+1 to all that.\n\nIt's maybe worth noting that it's probably fairly uncommon for vacuum\nto read a page and not dirty it, because if the page is all-visible,\nwe won't read it. And if it's not all-visible, and there's nothing\nelse interesting to do with it, we'll probably make it all-visible,\nwhich will dirty it. It can happen, if for example we vacuum a page\nwith no dead tuples while the inserting transaction is still running,\nor committed but not yet all-visible. Of course, in those cases we\nwon't be able to freeze, either.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 May 2013 13:21:54 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting vacuum_freeze_min_age really low"
},
{
"msg_contents": "On 2013-05-13 13:21:54 -0400, Robert Haas wrote:\n> On Sun, May 12, 2013 at 8:50 AM, Andres Freund <[email protected]> wrote:\n> > [ a response that I entirely agree with ]\n> \n> +1 to all that.\n\n> It's maybe worth noting that it's probably fairly uncommon for vacuum\n> to read a page and not dirty it, because if the page is all-visible,\n> we won't read it.\n\nBut only if 50(?)+ pages are marked all-visible in one go, otherwise we\nafair won't skip afair. And we don't skip them at all during full table\nvacuums.\n\n> And if it's not all-visible, and there's nothing\n> else interesting to do with it, we'll probably make it all-visible,\n> which will dirty it. It can happen, if for example we vacuum a page\n> with no dead tuples while the inserting transaction is still running,\n> or committed but not yet all-visible. Of course, in those cases we\n> won't be able to freeze, either.\n\nIIRC the actual values below which we freeze are always computed\nrelative to GetOldestXmin() (and have to, otherwise rows will suddently\nappear visible). In many, many environment thats lagging behind quite a\nbit. Longrunning user transactions, pg_dump, hot_standby_feedback,\nvacuum_defer_cleanup_age...\n\nAlso, even if the *whole* page isn't all visible because e.g. there just\nwas another row inserted we still freeze individual rows.\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 May 2013 19:43:22 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting vacuum_freeze_min_age really low"
},
{
"msg_contents": "HAndres,\n\n> Well. For one you haven't proven that the changed setting actually\n> improves performance. So the comparison isn't really valid. We will\n\nI agree that I haven't proven this yet, but that doesn't make it\ninvalid. Just unproven.\n\nI agree that performance testing is necessary ... and the kind of\nperformance testing which generated freeze activity, which makes it harder.\n\n> I think you're missing the fact that we don't neccessarily dirty pages,\n> just because vacuum visits them. In a mostly insert workload its not\n> uncommon that vacuum doesn't change anything. In many scenarios the\n\nHmmm. But does vacuum visit the pages anyway, in that case?\n\n> b) freezing tuples requires a xlog_heap_freeze wal record to be\n> emitted. If we don't freeze, we don't need to emit it.\n\nOh, that's annoying.\n\n> I think I have said that before, but anyway: I think as long as we need\n> to regularly walk the whole relation for correctness there isn't much\n> hope to get this into an acceptable state. If we would track the oldest\n> xid in a page in a 'freeze map' we could make much of this more\n> efficient and way more scalable to bigger data volumes.\n\nYeah, or come up with some way to eliminate freezing entirely.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 13 May 2013 21:02:04 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting vacuum_freeze_min_age really low"
}
] |
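For anyone who would rather experiment than argue from first principles, two small concrete steps are to check how far tables are from a full-table freeze scan and to lower the freeze thresholds for a single table instead of globally. The table name below is a placeholder and the values are only illustrative:

    -- how close each table is to an anti-wraparound / full-table vacuum
    SELECT relname,
           age(relfrozenxid) AS xid_age
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY age(relfrozenxid) DESC
    LIMIT 10;

    -- per-table experiment: freeze much earlier on one busy table
    ALTER TABLE some_busy_table
      SET (autovacuum_freeze_min_age = 10000,
           autovacuum_freeze_table_age = 100000);

Comparing vacuum timings and WAL volume before and after such a change is probably the cheapest way to get the measurements the thread is asking for.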
[
{
"msg_contents": "Hi all,\n\nJust to let the community know that probably Linux kernel 3.10\nwill include some work to make sysv semaphore code more scalable.\nSince Postgres use this kernel features I've thought to post it\nhere just for the sake of sharing a good news:\n\nThe thread where Rik van Riel posts the v3 patch series:\n\nhttp://thread.gmane.org/gmane.linux.kernel/1460761\n\nHere http://thread.gmane.org/gmane.linux.kernel/1460761/focus=1461794\nDavidlohr Bueso (patches co-author) post an interesting results\ncomparing multiple runs of \"[his] Oracle Swingbench DSS\" workload.\nReported here for brevity:\n\nTPS:\n100 users: 1257.21 (vanilla) 2805.06 (v3 patchset)\n400 users: 1437.57 (vanilla) 2664.67 (v3 patchset)\n800 users: 1236.89 (vanilla) 2750.73 (v3 patchset)\n\nipc lock contention:\n100 users: 8,74% (vanilla) 3.17% (v3 patchset)\n400 users: 21,86% (vanilla) 5.23% (v3 patchset)\n800 users 84,35% (vanilla) 7.39% (v3 patchset)\n\nas soon as I've a spare server to play with and a little\nof time I will give it a spin.\n\nAndrea\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 26 Mar 2013 14:04:41 +0100",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": true,
"msg_subject": "[OT] linux 3.10 kernel will improve ipc,sysv semaphore scalability"
},
{
"msg_contents": "On 03/26/2013 08:04 AM, Andrea Suisani wrote:\n\n> TPS:\n> 100 users: 1257.21 (vanilla) 2805.06 (v3 patchset)\n> 400 users: 1437.57 (vanilla) 2664.67 (v3 patchset)\n> 800 users: 1236.89 (vanilla) 2750.73 (v3 patchset)\n\nWow, I like the look of that. I wonder how this reacts with disabled \nautogrouping and increasing sched_migration_cost. If the completely fair \nscheduler has less locking contention with this patch-set, those tweaks \nmay not even be necessary. I need to see if I can find a system to test on.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 26 Mar 2013 13:59:50 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [OT] linux 3.10 kernel will improve ipc,sysv semaphore\n scalability"
},
{
"msg_contents": "Hi\n\nOn 03/26/2013 07:59 PM, Shaun Thomas wrote:\n> On 03/26/2013 08:04 AM, Andrea Suisani wrote:\n>\n>> TPS:\n>> 100 users: 1257.21 (vanilla) 2805.06 (v3 patchset)\n>> 400 users: 1437.57 (vanilla) 2664.67 (v3 patchset)\n>> 800 users: 1236.89 (vanilla) 2750.73 (v3 patchset)\n>\n> Wow, I like the look of that.\n\nMe too.\n\n> I wonder how this reacts with disabled autogrouping and increasing\n> sched_migration_cost. If the completely fair scheduler has less locking\n > contention with this patch-set, those tweaks may not even be necessary.\n > I need to see if I can find a system to test on.\n\nYep, I'm eager to test it too, but unfortunately I have\nno spare server to use. Probably early next I'll test it\non my workstation.\n\nAndrea\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 29 Mar 2013 15:52:55 +0100",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [OT] linux 3.10 kernel will improve ipc,sysv semaphore\n scalability"
},
{
"msg_contents": "Hi all,\n\nOn 03/26/2013 07:59 PM, Shaun Thomas wrote:\n> On 03/26/2013 08:04 AM, Andrea Suisani wrote:\n>\n>> TPS:\n>> 100 users: 1257.21 (vanilla) 2805.06 (v3 patchset)\n>> 400 users: 1437.57 (vanilla) 2664.67 (v3 patchset)\n>> 800 users: 1236.89 (vanilla) 2750.73 (v3 patchset)\n>\n> Wow, I like the look of that. I wonder how this reacts with disabled autogrouping and increasing sched_migration_cost. If the completely fair scheduler has less locking contention with this patch-set, those tweaks may not even be necessary. I need to see if I can find a system to test on.\n>\n\n\nstill no time to do proper testing but I just want to inform that the\npatchset actually went in the 3.10 first release candidate, as we can see\nfrom git log :)\n\n\ngit log v3.9..v3.10-rc1 --grep=\"ipc,sem: fine grained locking for semtimedop\" -- ipc/sem.c\n\ncommit 6062a8dc0517bce23e3c2f7d2fea5e22411269a3\nAuthor: Rik van Riel <[email protected]>\nDate: Tue Apr 30 19:15:44 2013 -0700\n\nipc,sem: fine grained locking for semtimedop\n\n...\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 16 May 2013 10:47:07 +0200",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [OT] linux 3.10 kernel will improve ipc,sysv semaphore\n scalability"
}
] |
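As a side note for anyone checking whether their own installation leans on these code paths: PostgreSQL allocates SysV semaphores at startup, and the kernel limits and current usage can be inspected from the shell. The commands below are standard Linux tools; exact output formats vary by distribution:

    uname -r                  # kernel version; the scalability work discussed above landed in 3.10
    sysctl kernel.sem         # SEMMSL SEMMNS SEMOPM SEMMNI limits
    ipcs -s | grep postgres   # semaphore sets currently held by the postgres user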
[
{
"msg_contents": "Hi all,\n\nCurrently, I'm working on a performance issue with a query which takes\naround ~30 sec to complete. This happens only, when there are more\nactivity going around same table. The same query completes with in a\nsecond when there are no activity on that table.\n\nTried taking EXPLAIN ANALYZE output and analyze it, since it completes\nwith in a second, there are no blockers in it.\n\nIs there any way to see whether postgress settings creates the\nproblem? which could be related to memory, locks or I/O etc..\n\nThanks for your help.\n--Prakash\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 27 Mar 2013 19:34:53 +0530",
"msg_from": "Prakash Chinnakannan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Reg: Slow query"
},
{
"msg_contents": "Hello,\nThis definitively doesn't look like something that has to do with postgres\nsettings, can you show us the statement and the explain plan ?\nAlso , have you checked pg_stat_activity to monitor what is running at the\nsame time when the delay occurs ?\nIt kinda looks like a lock to me.\n\nVasilis Ventirozos\n\n\nOn Wed, Mar 27, 2013 at 4:04 PM, Prakash Chinnakannan <\[email protected]> wrote:\n\n> Hi all,\n>\n> Currently, I'm working on a performance issue with a query which takes\n> around ~30 sec to complete. This happens only, when there are more\n> activity going around same table. The same query completes with in a\n> second when there are no activity on that table.\n>\n> Tried taking EXPLAIN ANALYZE output and analyze it, since it completes\n> with in a second, there are no blockers in it.\n>\n> Is there any way to see whether postgress settings creates the\n> problem? which could be related to memory, locks or I/O etc..\n>\n> Thanks for your help.\n> --Prakash\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHello, This definitively doesn't look like something that has to do with postgres settings, can you show us the statement and the explain plan ?Also , have you checked pg_stat_activity to monitor what is running at the same time when the delay occurs ?\nIt kinda looks like a lock to me.Vasilis VentirozosOn Wed, Mar 27, 2013 at 4:04 PM, Prakash Chinnakannan <[email protected]> wrote:\nHi all,\n\nCurrently, I'm working on a performance issue with a query which takes\naround ~30 sec to complete. This happens only, when there are more\nactivity going around same table. The same query completes with in a\nsecond when there are no activity on that table.\n\nTried taking EXPLAIN ANALYZE output and analyze it, since it completes\nwith in a second, there are no blockers in it.\n\nIs there any way to see whether postgress settings creates the\nproblem? which could be related to memory, locks or I/O etc..\n\nThanks for your help.\n--Prakash\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 27 Mar 2013 16:15:30 +0200",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reg: Slow query"
}
] |
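Since the advice in this thread is to look for lock contention, a concrete starting point is the catalog views themselves. Column names shifted around the 9.2 release (current_query became query, for example), so the sketch below assumes a 9.2-era server and may need adjusting:

    -- sessions waiting on a lock, and the sessions holding the conflicting lock
    SELECT w.pid   AS waiting_pid,
           w.query AS waiting_query,
           h.pid   AS holding_pid,
           h.query AS holding_query
    FROM pg_locks bl
    JOIN pg_stat_activity w ON w.pid = bl.pid
    JOIN pg_locks gl ON gl.granted
                    AND gl.pid <> bl.pid
                    AND gl.locktype = bl.locktype
                    AND gl.relation IS NOT DISTINCT FROM bl.relation
                    AND gl.transactionid IS NOT DISTINCT FROM bl.transactionid
    JOIN pg_stat_activity h ON h.pid = gl.pid
    WHERE NOT bl.granted;

If the slow runs coincide with rows showing up here, the problem is blocking rather than configuration.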
[
{
"msg_contents": "Greetings,\n\nWe've been using postgreSQL for a few years. This is my first post here\nand first real dive into query plans.\n\n\nA description of what you are trying to achieve and what results you expect.:\n Query results of nested joins of table. Results are correct - just\ntakes a long time with selected plan.\n\nPostgreSQL version number you are running:\n PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit\n\nHow you installed PostgreSQL:\n\n yum, using PGDG repo (package\npostgresql92-server-9.2.3-2PGDG.rhel6.x86_64 and friends)\n\n\nChanges made to the settings in the postgresql.conf file\n DateStyle = ISO, MDY\n default_tablespace = esc_data\n default_text_search_config = pg_catalog.english\n effective_cache_size = 24GB\n lc_messages = en_US.UTF-8\n lc_monetary = en_US.UTF-8\n lc_numeric = en_US.UTF-8\n lc_time = en_US.UTF-8\n listen_addresses = 0.0.0.0\n log_connections = on\n log_destination = stderr\n log_disconnections = on\n log_line_prefix = %t %c\n log_rotation_age = 1d\n log_timezone = US/Eastern\n logging_collector = on\n maintenance_work_mem = 96MB\n max_connections = 100\n search_path = \"$user\", esc_funcs, public\n shared_buffers = 8GB\n TimeZone = US/Eastern\n track_functions = all\n track_io_timing = on\n\nOperating system and version:\nRed Hat Enterprise Linux Server release 6.4 (Santiago)\n\nWhat program you're using to connect to PostgreSQL:\n\njava(jdbc) and psql\n\n\nIs there anything relevant or unusual in the PostgreSQL server logs?:\n\nno\n\nThe issue is similar on PostgreSQL 9.0.5 on x86_64-pc-linux-gnu, compiled\nby GCC gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3 on Ubuntu 10.04 64-bit\nalthough we're doing troubleshooting on our new RHEL server.\n\nWe have a particular query that takes about 75 minutes to complete. The\nselected execution plan estimates 1 row from several of the outermost\nresults so picks nested loop join resolutions. That turns out to be a bad\nchoice since actual row counts are in the thirty to fifty thousand range.\noriginal selected plan: http://explain.depesz.com/s/muR\nSQL: http://pastebin.com/f40Xp0JM\n\nI set enable_nestloop=false to hint at the planner not to use nested loop.\nThat resulted in 13 second runtime. It appears this plan was considered\noriginally but estimated cost was higher than the plan above.\nenable_nestloop=false: http://explain.depesz.com/s/mAa\nSQL: http://pastebin.com/CgcSe7r6\n\nWe tried rewriting the query using WITH clauses. That took 82 seconds but\nplan thought it would take much longer.\nusing with clauses: http://explain.depesz.com/s/GEZ\nSQL: http://pastebin.com/ZRvRK2TV\n\nWe have been looking into the issue to the best of our ability but can't\nfigure out how to help the planner. I've looked at the planner source some\nand see where row count is set to 1 if it's <= 1. I haven't found where\nit's set yet but presume it was unable to determine the result set row\ncount and defaulted to 1.\n\nI've run analyze manually and tried it with default_statistics_target=10000\nto see if that helped. It didn't.\nThe table is static - no new rows are being added and there is no other\nload on the database.\n\nschema dump: http://pastebin.com/pUU0BJvr\n\nWhat can we do to help the planner estimate better?\n\nThanks in advance,\nMarty Frasier\n\nGreetings,We've been using postgreSQL for a few years. 
This is my first post here and first real dive into query plans.A description of what you are trying to achieve and what results you expect.:\n Query results of nested joins of table. Results are correct - just takes a long time with selected plan.\nPostgreSQL version number you are running:\n PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bitHow you installed PostgreSQL: yum, using PGDG repo (package postgresql92-server-9.2.3-2PGDG.rhel6.x86_64 and friends)\nChanges made to the settings in the postgresql.conf file DateStyle = ISO, MDY default_tablespace = esc_data default_text_search_config = pg_catalog.english effective_cache_size = 24GB lc_messages = en_US.UTF-8\n lc_monetary = en_US.UTF-8 lc_numeric = en_US.UTF-8 lc_time = en_US.UTF-8 listen_addresses = 0.0.0.0 log_connections = on log_destination = stderr log_disconnections = on log_line_prefix = %t %c \n log_rotation_age = 1d log_timezone = US/Eastern logging_collector = on maintenance_work_mem = 96MB max_connections = 100 search_path = \"$user\", esc_funcs, public shared_buffers = 8GB\n TimeZone = US/Eastern track_functions = all track_io_timing = onOperating system and version:Red Hat Enterprise Linux Server release 6.4 (Santiago)What program you're using to connect to PostgreSQL:\njava(jdbc) and psqlIs there anything relevant or unusual in the PostgreSQL server logs?:noThe issue is similar on PostgreSQL 9.0.5 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3 on Ubuntu 10.04 64-bit although we're doing troubleshooting on our new RHEL server.\nWe have a particular query that takes about 75 minutes to complete. The selected execution plan estimates 1 row from several of the outermost results so picks nested loop join resolutions. That turns out to be a bad choice since actual row counts are in the thirty to fifty thousand range.\noriginal selected plan: http://explain.depesz.com/s/muRSQL: http://pastebin.com/f40Xp0JM\nI set enable_nestloop=false to hint at the planner not to use nested loop. That resulted in 13 second runtime. It appears this plan was considered originally but estimated cost was higher than the plan above.\n\nenable_nestloop=false: http://explain.depesz.com/s/mAaSQL: http://pastebin.com/CgcSe7r6\nWe tried rewriting the query using WITH clauses. That took 82 seconds but plan thought it would take much longer.\nusing with clauses: http://explain.depesz.com/s/GEZSQL: http://pastebin.com/ZRvRK2TV\nWe have been looking into the issue to the best of our ability but can't figure out how to help the planner. I've looked at the planner source some and see where row count is set to 1 if it's <= 1. I haven't found where it's set yet but presume it was unable to determine the result set row count and defaulted to 1.\nI've run analyze manually and tried it with default_statistics_target=10000 to see if that helped. It didn't.The table is static - no new rows are being added and there is no other load on the database.\nschema dump: http://pastebin.com/pUU0BJvrWhat can we do to help the planner estimate better?Thanks in advance,Marty Frasier",
"msg_date": "Thu, 28 Mar 2013 11:59:05 -0400",
"msg_from": "Marty Frasier <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to help the planner"
},
{
"msg_contents": "Marty Frasier <[email protected]> writes:\n> We've been using postgreSQL for a few years. This is my first post here\n> and first real dive into query plans.\n\nOne quick thought is that it's probably worth cranking up\njoin_collapse_limit and/or from_collapse_limit, since the number of\nrelations in the query is considerably more than the default values of\nthose limits. This will make planning take longer but possibly find\nbetter plans. I'm not sure it will help a lot, since most of the\nproblem is evidently bad rowcount estimates, but it might help.\n\nAlso it seems like the major rowcount failing is in the estimate for the\nt12 subquery. I can't tell why that particular combination of WHERE\nclauses is giving it such a hard time --- is there something odd about\nthe distribution of 'cahsee_ela' tests? Why is that particular subquery\ngrouped over school/student when all the others are grouped over just\nstudent?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Mar 2013 12:18:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to help the planner"
},
{
"msg_contents": "Tom,\n\nI cranked (join|from)_collapse_limit up to 50, then 500 just to exclude the\nlimits completey, and attempted the query both times. The planner came up\nwith an estimate close to the other estimates (1,944,276) and I stopped\nactual execution after some length of time.\n\nThe t12 subquery is grouped differently because that particular test can be\nvalid at mutliple schools per student.\n\nI had set session pg_default_statistics to 10000 and analyzed prior to the\nearlier runs to allow it to have the best stats it could. I've looked at\nit a little more closely, setting pg_default_statistics back to default of\n100 and re-ran analyze on that database.\n\nThe value 'cahsee_ela' occurs 75,000 times in column\nanalysis.iteration__student__test__year.test which totals 11M rows. It's\nranked about 60 of 91 values in frequency.\nBy setting statistics=1000 on the column 'test' the MCV from pg_stats\ncontains all 91 distinct values (there are no nulls) and there is no\nhistogram_bounds value for the column. From MCV: cahsee_ela = 0.00658\nwhich is accurate.\nI think that should give the planner good info on the selectivity of the\nwhere clause. It appears from the var_eq_const function that it will use\nthat exact value when found. It doesn' t seem to help the outcome though\nas it had good stats before. I just understand it a little better now -\nwhich is good.\n\nDo you have any suggestions where to probe next?\nI see some statistics hooks mentioned in some of the source codes but don't\nknow how to take advantage of them or whether it would be of use.\nI suppose the answer could eventually be we have to reorganize our queries?\n\nThanks,\nMarty\n\n\n\nOn Thu, Mar 28, 2013 at 12:18 PM, Tom Lane <[email protected]> wrote:\n\n> Marty Frasier <[email protected]> writes:\n> > We've been using postgreSQL for a few years. This is my first post here\n> > and first real dive into query plans.\n>\n> One quick thought is that it's probably worth cranking up\n> join_collapse_limit and/or from_collapse_limit, since the number of\n> relations in the query is considerably more than the default values of\n> those limits. This will make planning take longer but possibly find\n> better plans. I'm not sure it will help a lot, since most of the\n> problem is evidently bad rowcount estimates, but it might help.\n>\n> Also it seems like the major rowcount failing is in the estimate for the\n> t12 subquery. I can't tell why that particular combination of WHERE\n> clauses is giving it such a hard time --- is there something odd about\n> the distribution of 'cahsee_ela' tests? Why is that particular subquery\n> grouped over school/student when all the others are grouped over just\n> student?\n>\n> regards, tom lane\n>\n\nTom,I cranked (join|from)_collapse_limit up to 50, then 500 just to exclude the limits completey, and attempted the query both times. The planner came up with an estimate close to the other estimates (1,944,276) and I stopped actual execution after some length of time.\nThe t12 subquery is grouped differently because that particular test can be valid at mutliple schools per student.I had set session pg_default_statistics to 10000 and analyzed prior to the earlier runs to allow \nit to have the best stats it could. I've looked at it a little more \nclosely, setting pg_default_statistics back to default of 100 and re-ran analyze on that database.The value 'cahsee_ela' occurs 75,000 times in \ncolumn analysis.iteration__student__test__year.test which totals 11M rows. 
\nIt's ranked about 60 of 91 values in frequency.By setting statistics=1000 on the column 'test' the MCV from pg_stats contains all 91 distinct values (there are no nulls) and there is no histogram_bounds value for the column. From MCV: cahsee_ela = 0.00658 which is accurate.\nI think that should give the planner good info on the selectivity of the where clause. It appears from the var_eq_const function that it will use that exact value when found. It doesn' t seem to help the outcome though as it had good stats before. I just understand it a little better now - which is good.\nDo you have any suggestions where to probe next?I see some statistics hooks mentioned in some of the source codes but don't know how to take advantage of them or whether it would be of use.\nI suppose the answer could eventually be we have to reorganize our queries? Thanks,Marty\nOn Thu, Mar 28, 2013 at 12:18 PM, Tom Lane <[email protected]> wrote:\nMarty Frasier <[email protected]> writes:\n> We've been using postgreSQL for a few years. This is my first post here\n> and first real dive into query plans.\n\nOne quick thought is that it's probably worth cranking up\njoin_collapse_limit and/or from_collapse_limit, since the number of\nrelations in the query is considerably more than the default values of\nthose limits. This will make planning take longer but possibly find\nbetter plans. I'm not sure it will help a lot, since most of the\nproblem is evidently bad rowcount estimates, but it might help.\n\nAlso it seems like the major rowcount failing is in the estimate for the\nt12 subquery. I can't tell why that particular combination of WHERE\nclauses is giving it such a hard time --- is there something odd about\nthe distribution of 'cahsee_ela' tests? Why is that particular subquery\ngrouped over school/student when all the others are grouped over just\nstudent?\n\n regards, tom lane",
"msg_date": "Thu, 28 Mar 2013 16:45:26 -0400",
"msg_from": "Marty Frasier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to help the planner"
},
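For anyone wanting to replay the steps discussed so far in this thread, they boil down to a handful of statements; the table and column names and the statistics target come straight from the messages above, and the collapse-limit values are the ones already tried:

    -- raise the statistics target for the filtered column, then refresh stats
    ALTER TABLE analysis.iteration__student__test__year
      ALTER COLUMN test SET STATISTICS 1000;
    ANALYZE analysis.iteration__student__test__year;

    -- session-level planner experiments
    SET join_collapse_limit = 50;
    SET from_collapse_limit = 50;
    EXPLAIN (ANALYZE, BUFFERS) SELECT ... ;   -- the problem query

    -- comparison run, as used earlier in the thread
    SET enable_nestloop = off;
    EXPLAIN (ANALYZE, BUFFERS) SELECT ... ;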
{
"msg_contents": "Marty,\n\n* Marty Frasier ([email protected]) wrote:\n> We have a particular query that takes about 75 minutes to complete. The\n> selected execution plan estimates 1 row from several of the outermost\n> results so picks nested loop join resolutions. That turns out to be a bad\n> choice since actual row counts are in the thirty to fifty thousand range.\n\nI've seen exactly this behaviour and it's led to many cases where we've\nhad to simply disable nest loop for a given query. They're usually in\nfunctions, so that turns out to be workable without having to deal with\napplication changes. Still, it totally sucks.\n\n> I haven't found where\n> it's set yet but presume it was unable to determine the result set row\n> count and defaulted to 1.\n\nNo.. There's no 'default to 1', afaik. The problem seems to simply be\nthat PG ends up estimating the number of rows coming back very poorly.\nI'm actually suspicious that the number it's coming up with is much\n*smaller* than one and then clamping it back to '1' as a minimum instead\nof rounding it down to zero. I did see one query that moved to a nested\nloop query plan from a more sensible plan when upgrading from 9.0 to\n9.2, but there were plans even under 9.0 that were similairly bad.\n\nThe one thing I've not had a chance to do yet is actually build out a\ntest case which I can share which demonstrates this bad behaviour. If\nthat's something which you could provide, it would absolutely help us in\nunderstanding and perhaps solving this issue.\n\n\tThanks!\n\n\t\tStephen",
"msg_date": "Thu, 28 Mar 2013 18:13:11 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to help the planner"
},
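A minimal sketch of the workaround Stephen describes, scoping enable_nestloop = off to one transaction or one function instead of the whole server (the function below is a placeholder with a trivial body, only to show where the setting attaches):

    -- Per transaction: SET LOCAL reverts automatically at COMMIT or ROLLBACK.
    BEGIN;
    SET LOCAL enable_nestloop = off;
    -- run the problem query here
    COMMIT;

    -- Per function: the setting applies only while the function runs.
    CREATE OR REPLACE FUNCTION run_problem_report()
    RETURNS bigint
    LANGUAGE sql
    SET enable_nestloop = off
    AS $$ SELECT count(*) FROM pg_class $$;  -- placeholder body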
{
"msg_contents": "Marty,\n\nWhen you change from/join collaps_limit pay attention to Genetic Query Optimizer settings, I believe by default it's \"on\" (geqo = on).\nSpecifically look at geqo_threshold parameter (default is 12). \nAFAIK, if you don't have intensions to use Genetic Query Optimizer, geqo_threshold parameter should be higher than your collaps_limit, e.g. if you want to set collaps_limit to 50, and you think you may join 50 tables, then also increase geqo_threshold to at least 51.\nOtherwise GeCO will come into play unexpectedly.\n\nBesides this, try to play with these parameters (according to your original message you keep them at default):\n\n#seq_page_cost = 1.0\t\t\t# measured on an arbitrary scale\nrandom_page_cost = 2.0\t\t\t# same scale as above (default 4.0)\ncpu_tuple_cost = 0.05\t\t\t# same scale as above (default 0.01)\ncpu_index_tuple_cost = 0.05\t\t# same scale as above (default 0.005)\ncpu_operator_cost = 0.0075\t\t# same scale as above (default 0.0025)\n\nStart with cpu_tuple_cost, increasing it from default 0.01 to 0.03-0.05.\n\nRegards,\nIgor Neyman\n\n\nFrom: Marty Frasier [mailto:[email protected]] \nSent: Thursday, March 28, 2013 4:45 PM\nTo: Tom Lane\nCc: [email protected]; James Quinn\nSubject: Re: how to help the planner\n\nTom,\nI cranked (join|from)_collapse_limit up to 50, then 500 just to exclude the limits completey, and attempted the query both times. The planner came up with an estimate close to the other estimates (1,944,276) and I stopped actual execution after some length of time.\nThe t12 subquery is grouped differently because that particular test can be valid at mutliple schools per student.\n\nI had set session pg_default_statistics to 10000 and analyzed prior to the earlier runs to allow it to have the best stats it could. I've looked at it a little more closely, setting pg_default_statistics back to default of 100 and re-ran analyze on that database.\n\nThe value 'cahsee_ela' occurs 75,000 times in column analysis.iteration__student__test__year.test which totals 11M rows. It's ranked about 60 of 91 values in frequency.\nBy setting statistics=1000 on the column 'test' the MCV from pg_stats contains all 91 distinct values (there are no nulls) and there is no histogram_bounds value for the column. From MCV: cahsee_ela = 0.00658 which is accurate.\nI think that should give the planner good info on the selectivity of the where clause. It appears from the var_eq_const function that it will use that exact value when found. It doesn' t seem to help the outcome though as it had good stats before. I just understand it a little better now - which is good.\n\nDo you have any suggestions where to probe next?\nI see some statistics hooks mentioned in some of the source codes but don't know how to take advantage of them or whether it would be of use.\nI suppose the answer could eventually be we have to reorganize our queries?\n \nThanks,\nMarty\n\n\nOn Thu, Mar 28, 2013 at 12:18 PM, Tom Lane <[email protected]> wrote:\nMarty Frasier <[email protected]> writes:\n> We've been using postgreSQL for a few years. This is my first post here\n> and first real dive into query plans.\nOne quick thought is that it's probably worth cranking up\njoin_collapse_limit and/or from_collapse_limit, since the number of\nrelations in the query is considerably more than the default values of\nthose limits. This will make planning take longer but possibly find\nbetter plans. 
I'm not sure it will help a lot, since most of the\nproblem is evidently bad rowcount estimates, but it might help.\n\nAlso it seems like the major rowcount failing is in the estimate for the\nt12 subquery. I can't tell why that particular combination of WHERE\nclauses is giving it such a hard time --- is there something odd about\nthe distribution of 'cahsee_ela' tests? Why is that particular subquery\ngrouped over school/student when all the others are grouped over just\nstudent?\n\n regards, tom lane\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 29 Mar 2013 18:23:06 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to help the planner"
}
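A minimal sketch of trying Igor's settings for a single session before touching postgresql.conf; the numbers are the ones from his message, not general recommendations:

    -- Keep GEQO from kicking in when the collapse limits are raised.
    SET join_collapse_limit = 50;
    SET from_collapse_limit = 50;
    SET geqo_threshold = 51;          -- higher than the collapse limits

    -- Shift the cost model toward cheaper I/O and costlier per-tuple CPU work.
    SET random_page_cost = 2.0;
    SET cpu_tuple_cost = 0.05;

    -- Then re-run the problem query under EXPLAIN ANALYZE in the same session.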
] |
[
{
"msg_contents": "Hi all,\n\n When I use postgres and issue a simple sequential scan for a table\ninventory using query \"select * from inventory;\", I can see from \"top\" that\npostmaster is using 100% CPU, which limits the query execution time. My\nquestion is that, why CPU is the bottleneck here and what is postmaster\ndoing? Is there any way to improve the performance? Thanks!\n\nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n15401 postgres 20 0 4371m 517m 515m R 99.8 3.2 0:30.14 postmaster\n\nQuery: select * from inventory;\n\nexplain analyze select * from inventory;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------\n------------\n Seq Scan on inventory (cost=0.00..180937.00 rows=11745000 width=16)\n(actual time=0.005..1030.403 rows=117450\n00 loops=1)\n Total runtime: 1750.889 ms\n(2 rows)\n\nTable \"public.inventory\"\n Column | Type | Modifiers\n----------------------+---------+-----------\n inv_date_sk | integer | not null\n inv_item_sk | integer | not null\n inv_warehouse_sk | integer | not null\n inv_quantity_on_hand | integer |\nIndexes:\n \"inventory_pkey\" PRIMARY KEY, btree (inv_date_sk, inv_item_sk,\ninv_warehouse_sk)\n\nHi all, When I use postgres and issue a simple sequential scan for a table inventory using query \"select * from inventory;\", I can see from \"top\" that postmaster is using 100% CPU, which limits the query execution time. My question is that, why CPU is the bottleneck here and what is postmaster doing? Is there any way to improve the performance? Thanks!\nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND15401 postgres 20 0 4371m 517m 515m R 99.8 3.2 0:30.14 postmasterQuery: select * from inventory;explain analyze select * from inventory;\n QUERY PLAN-------------------------------------------------------------------------------------------------------------------------- Seq Scan on inventory (cost=0.00..180937.00 rows=11745000 width=16) (actual time=0.005..1030.403 rows=117450\n00 loops=1) Total runtime: 1750.889 ms(2 rows)Table \"public.inventory\" Column | Type | Modifiers----------------------+---------+----------- inv_date_sk | integer | not null\n inv_item_sk | integer | not null inv_warehouse_sk | integer | not null inv_quantity_on_hand | integer |Indexes: \"inventory_pkey\" PRIMARY KEY, btree (inv_date_sk, inv_item_sk, inv_warehouse_sk)",
"msg_date": "Thu, 28 Mar 2013 12:07:01 -0700",
"msg_from": "kelphet xiong <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question about postmaster's CPU usage"
},
{
"msg_contents": "kelphet xiong <[email protected]> wrote:\n\n> When I use postgres and issue a simple sequential scan for a\n> table inventory using query \"select * from inventory;\", I can see\n> from \"top\" that postmaster is using 100% CPU, which limits the\n> query execution time. My question is that, why CPU is the\n> bottleneck here and what is postmaster doing? Is there any way to\n> improve the performance? Thanks!\n\n> explain analyze select * from inventory;\n> \n> Seq Scan on inventory (cost=0.00..180937.00 rows=11745000 width=16) (actual time=0.005..1030.403 rows=11745000 loops=1)\n> Total runtime: 1750.889 ms\n\nSo it is reading and returning 11.7 million rows in about 1 second,\nor about 88 nanoseconds (billionths of a second) per row. You\ncan't be waiting for a hard drive for many of those reads, or it\nwould take a lot longer, so the bottleneck is the CPU pushing the\ndata around in RAM. I'm not sure why 100% CPU usage would surprise\nyou. Are you wondering why the CPU works on the query straight\nthrough until it is done, rather than taking a break periodically\nand letting the unfinished work sit there?\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Mar 2013 14:03:42 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about postmaster's CPU usage"
},
{
"msg_contents": "On Thu, Mar 28, 2013 at 02:03:42PM -0700, Kevin Grittner wrote:\n> kelphet xiong <[email protected]> wrote:\n> \n> > When I use postgres and issue a simple sequential scan for a\n> > table inventory using query \"select * from inventory;\", I can see\n> > from \"top\" that postmaster is using 100% CPU, which limits the\n> > query execution time. My question is that, why CPU is the\n> > bottleneck here and what is postmaster doing? Is there any way to\n> > improve the performance? Thanks!\n> \n> > explain analyze select * from inventory;\n> > \n> > Seq Scan on inventory (cost=0.00..180937.00 rows=11745000 width=16) (actual time=0.005..1030.403 rows=11745000 loops=1)\n> > Total runtime: 1750.889 ms\n> \n> So it is reading and returning 11.7 million rows in about 1 second,\n> or about 88 nanoseconds (billionths of a second) per row. You\n> can't be waiting for a hard drive for many of those reads, or it\n> would take a lot longer, so the bottleneck is the CPU pushing the\n> data around in RAM. I'm not sure why 100% CPU usage would surprise\n> you. Are you wondering why the CPU works on the query straight\n> through until it is done, rather than taking a break periodically\n> and letting the unfinished work sit there?\n> \n> --\n> Kevin Grittner\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n> \n\nAlternatively, purchase a faster CPU if CPU is the bottleneck as it\nis in this case or partition the work into parallel queuries that can\neach use a processor.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 28 Mar 2013 16:20:59 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about postmaster's CPU usage"
},
{
"msg_contents": "On Mar 28, 2013 9:07 PM, \"kelphet xiong\" <[email protected]> wrote:\n> explain analyze select * from inventory;\n> QUERY PLAN\n>\n>\n--------------------------------------------------------------------------------------------------------------\n> ------------\n> Seq Scan on inventory (cost=0.00..180937.00 rows=11745000 width=16)\n(actual time=0.005..1030.403 rows=117450\n> 00 loops=1)\n> Total runtime: 1750.889 ms\n> (2 rows)\n\nA large fraction of that time, if not most is due to timing overhead. You\ncan try the same query without timing by using explain (analyze on, timing\noff) select * from inventory;\n\nRegards,\nAnts Aasma\n\nOn Mar 28, 2013 9:07 PM, \"kelphet xiong\" <[email protected]> wrote:\n> explain analyze select * from inventory;\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------\n> ------------\n> Seq Scan on inventory (cost=0.00..180937.00 rows=11745000 width=16) (actual time=0.005..1030.403 rows=117450\n> 00 loops=1)\n> Total runtime: 1750.889 ms\n> (2 rows)\nA large fraction of that time, if not most is due to timing overhead. You can try the same query without timing by using explain (analyze on, timing off) select * from inventory;\nRegards,\nAnts Aasma",
"msg_date": "Sun, 31 Mar 2013 01:45:31 +0200",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about postmaster's CPU usage"
},
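For reference, the syntax Ants is suggesting, next to the older form; the TIMING option only exists in 9.2 and later, which is why it is rejected further down this thread:

    EXPLAIN (ANALYZE ON, TIMING OFF) SELECT * FROM inventory;  -- 9.2 and later
    EXPLAIN ANALYZE SELECT * FROM inventory;                   -- any version, with per-row timing overhead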
{
"msg_contents": "Thanks a lot for replies from Kevin, Ken, and Ants Aasma. I really\naappreciate your suggestions and comments.\n\n\n\nMy server configuration is two physical quad-core CPUs with\nhyper-threading enabled.\nEach CPU is Intel(R) Xeon(R) CPU [email protected]. Physical memory is 16GB.\nI set shared_buffers as 4GB, effective_cache_size as 10GB and\ninventory table is around 500MB.\n\n\n\n From the information provided by top command, although the row for\npostmaster shows that postmaster is using 100%CPU,\nthe total CPU user time for the whole server never goes beyond 6.6%us.\nI guess it is because postgres only uses a single thread to read\nthe data or “pushing the data around in RAM” according to Kevin’s statement.\nThen my question is actually why postgres can not use the remaining 93.4%CPU.\n\n\n\nBtw, I also tried the command suggested by Ants Aasma, but got an error:\nexplain (analyze on, timing off) select * from inventory;\nERROR: syntax error at or near \"analyze\"\nLINE 1: explain (analyze on, timing off) select * from inventory;\n\n ^\n\nThanks!\n\nBest regards\nKelphet Xiong\n\nOn Thu, Mar 28, 2013 at 2:03 PM, Kevin Grittner <[email protected]> wrote:\n\n> kelphet xiong <[email protected]> wrote:\n>\n> > When I use postgres and issue a simple sequential scan for a\n> > table inventory using query \"select * from inventory;\", I can see\n> > from \"top\" that postmaster is using 100% CPU, which limits the\n> > query execution time. My question is that, why CPU is the\n> > bottleneck here and what is postmaster doing? Is there any way to\n> > improve the performance? Thanks!\n>\n> > explain analyze select * from inventory;\n> >\n> > Seq Scan on inventory (cost=0.00..180937.00 rows=11745000 width=16)\n> (actual time=0.005..1030.403 rows=11745000 loops=1)\n> > Total runtime: 1750.889 ms\n>\n> So it is reading and returning 11.7 million rows in about 1 second,\n> or about 88 nanoseconds (billionths of a second) per row. You\n> can't be waiting for a hard drive for many of those reads, or it\n> would take a lot longer, so the bottleneck is the CPU pushing the\n> data around in RAM. I'm not sure why 100% CPU usage would surprise\n> you. Are you wondering why the CPU works on the query straight\n> through until it is done, rather than taking a break periodically\n> and letting the unfinished work sit there?\n>\n> --\n> Kevin Grittner\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nThanks a lot for replies from Kevin, Ken, and Ants Aasma. I really aappreciate your suggestions and comments. \nMy server configuration is two physical quad-core CPUs with hyper-threading enabled. Each CPU is Intel(R) Xeon(R) CPU [email protected]. Physical memory is 16GB. I set shared_buffers as 4GB, effective_cache_size as 10GB and inventory table is around 500MB.\n From the information provided by top command, although the row for postmaster shows that postmaster is using 100%CPU, the total CPU user time for the whole server never goes beyond 6.6%us. \nI guess it is because postgres only uses a single thread to read the data or “pushing the data around in RAM” according to Kevin’s statement. 
Then my question is actually why postgres can not use the remaining 93.4%CPU.\n Btw, I also tried the command suggested by Ants Aasma, but got an error:explain (analyze on, timing off) select * from inventory;ERROR: syntax error at or near \"analyze\"\nLINE 1: explain (analyze on, timing off) select * from inventory; ^Thanks!Best regards\nKelphet XiongOn Thu, Mar 28, 2013 at 2:03 PM, Kevin Grittner <[email protected]> wrote:\nkelphet xiong <[email protected]> wrote:\n\n\n> When I use postgres and issue a simple sequential scan for a\n> table inventory using query \"select * from inventory;\", I can see\n> from \"top\" that postmaster is using 100% CPU, which limits the\n> query execution time. My question is that, why CPU is the\n> bottleneck here and what is postmaster doing? Is there any way to\n> improve the performance? Thanks!\n\n> explain analyze select * from inventory;\n>\n> Seq Scan on inventory (cost=0.00..180937.00 rows=11745000 width=16) (actual time=0.005..1030.403 rows=11745000 loops=1)\n> Total runtime: 1750.889 ms\n\nSo it is reading and returning 11.7 million rows in about 1 second,\nor about 88 nanoseconds (billionths of a second) per row. You\ncan't be waiting for a hard drive for many of those reads, or it\nwould take a lot longer, so the bottleneck is the CPU pushing the\ndata around in RAM. I'm not sure why 100% CPU usage would surprise\nyou. Are you wondering why the CPU works on the query straight\nthrough until it is done, rather than taking a break periodically\nand letting the unfinished work sit there?\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 30 Mar 2013 21:00:02 -0700",
"msg_from": "Kelphet Xiong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about postmaster's CPU usage"
},
{
"msg_contents": "On Sat, Mar 30, 2013 at 11:00 PM, Kelphet Xiong <[email protected]> wrote:\n> I guess it is because postgres only uses a single thread to read\n> the data or “pushing the data around in RAM” according to Kevin’s statement.\n> Then my question is actually why postgres can not use the remaining\n> 93.4%CPU.\n\npostgres can use an arbitrary amount of threads to read data, but only\none per database connection.\n\n> Btw, I also tried the command suggested by Ants Aasma, but got an error:\n>\n> explain (analyze on, timing off) select * from inventory;\n> ERROR: syntax error at or near \"analyze\"\n>\n> LINE 1: explain (analyze on, timing off) select * from inventory;\n>\n> ^\n\nAbility to manipulate timing was added in 9.2.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 1 Apr 2013 08:42:42 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about postmaster's CPU usage"
}
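A hedged sketch of the only way to put more cores behind one logical scan on these releases (they predate built-in parallel query): open several connections and give each a disjoint slice of the primary key. The split value below is purely illustrative:

    -- connection 1
    SELECT * FROM inventory WHERE inv_date_sk < 2451500;
    -- connection 2
    SELECT * FROM inventory WHERE inv_date_sk >= 2451500;

Whether this helps depends on the client being able to consume both result streams in parallel; for a bare "select *" the network and the client are usually the next bottleneck.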
] |
[
{
"msg_contents": "Hi,\n\nI have a postgresql database (8.4) running in production whose \nperformance is degrading.\nThere is no single query that underperforms, all queries do.\nAnother interesting point is that a generic performance test \n(https://launchpad.net/tpc-b) gives mediocre peformance when run on the \ndatabase, BUT the same test on a newly created database, on the same pg \ncluster, on the same tablespace, does perform good.\n\nSo the problem seems to be limited to this database, even on newly \ncreated tables...\n\nWhat should I check to find the culprit of this degrading performance ?\n\nFranck",
"msg_date": "Fri, 29 Mar 2013 15:20:42 +0100",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql performance degrading... how to diagnose the root cause"
},
{
"msg_contents": "On 03/29/2013 15:20, Franck Routier wrote:\n> Hi,\n>\n\nHello,\n\n> I have a postgresql database (8.4) running in production whose \n> performance is degrading.\n> There is no single query that underperforms, all queries do.\n> Another interesting point is that a generic performance test \n> (https://launchpad.net/tpc-b) gives mediocre peformance when run on \n> the database, BUT the same test on a newly created database, on the \n> same pg cluster, on the same tablespace, does perform good.\n>\n> So the problem seems to be limited to this database, even on newly \n> created tables...\n>\n> What should I check to find the culprit of this degrading performance ?\n>\n\nDifficult to answer with so few details, but I would start by logging \nslow queries, and run an explain analyze on them (or use auto_explain).\nCheck if you're CPU bound or I/O bound (top, iostats, vmstat, systat, \ngstat..), check your configuration (shared_buffers, \neffective_cache_size, work_mem, checkpoint_segments, cpu_tuple_cost, ...)\n\n> Franck\n>\n\n\n-- \nNo trees were killed in the creation of this message.\nHowever, many electrons were terribly inconvenienced.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 29 Mar 2013 15:42:27 +0100",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql performance degrading... how to diagnose the root\n cause"
},
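A minimal sketch of the slow-query logging Julien suggests, confined to the affected database so the rest of the cluster is not flooded (the 500 ms threshold and database name are placeholders):

    -- Log every statement in this database that runs longer than 500 ms.
    ALTER DATABASE mydb SET log_min_duration_statement = 500;

    -- auto_explain additionally needs shared_preload_libraries = 'auto_explain'
    -- in postgresql.conf and a restart; it then logs the plan of each slow query.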
{
"msg_contents": "Franck Routier <franck.routier 'at' axege.com> writes:\n\n> Hi,\n>\n> I have a postgresql database (8.4) running in production whose\n> performance is degrading.\n> There is no single query that underperforms, all queries do.\n> Another interesting point is that a generic performance test\n> (https://launchpad.net/tpc-b) gives mediocre peformance when run on\n> the database, BUT the same test on a newly created database, on the\n> same pg cluster, on the same tablespace, does perform good.\n>\n> So the problem seems to be limited to this database, even on newly\n> created tables...\n>\n> What should I check to find the culprit of this degrading performance ?\n\nI don't know that tcp-b does but it looks like bloat, provided\nyour comparison with the newly created database is using the same\namount of data in database. You may want to use this loose bloat\nestimate:\n\nhttp://wiki.postgresql.org/wiki/Show_database_bloat\n\nand then use any preferred unbloat mechanism (vacuum full,\ncluster, possibly also reindex), and in the long term better\nconfigure some parameters (activate autovacuum if not already the\ncase, lower autovacuum_vacuum_cost_delay and raise\nautovacuum_vacuum_cost_limit, raise max_fsm_* on your 8.4 or\nupgrade to 9.x).\n\n-- \nGuillaume Cottenceau\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 29 Mar 2013 15:50:26 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql performance degrading... how to diagnose the root\n cause"
},
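A minimal sketch of acting on Guillaume's advice for one suspect table (the table name is a placeholder); note that VACUUM FULL takes an exclusive lock, so it belongs in a maintenance window:

    -- Quick signal of which tables carry the most dead rows.
    SELECT relname, n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 20;

    -- Rewrite one bloated table, refresh its stats, then rebuild its indexes
    -- (on 8.4 VACUUM FULL tends to bloat indexes, hence the REINDEX).
    VACUUM FULL VERBOSE ANALYZE some_bloated_table;
    REINDEX TABLE some_bloated_table;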
{
"msg_contents": "Hi,\n> I don't know that tcp-b does\ntpcb.jar is a java implementation of the http://www.tpc.org/tpcb/ \nbenchmark. It is not particularly representative of my workload, but \ngives a synthetic, db-agnostic, view of the system performance.\nWe use it to have quick view to compare differents servers (different \nOS, different RDBMS, etc...).\n\nThat said, the test wil create tables, load them with data, and perform \nsome transactions on them.\nThe point that makes me wonder what happens, is that the test run on my \nmain database is slow, while the same test run on a database on its own \nis quick.\nThis is the same postgresql cluster (same postgresql.conf), same \ntablespace (so same disks), same hardware obviously.\n\nRegarding the server activity, it seems quite flat : iostat shows that \ndisks are not working much (less than 5%), top shows only one active \ncore, and load average is well under 1...\n\n\n>\n> http://wiki.postgresql.org/wiki/Show_database_bloat\nHow do I interpret the output of this query ? Is 1.1 bloat level on a \ntable alarming, or quite ok ?\n\nFranck",
"msg_date": "Fri, 29 Mar 2013 16:31:07 +0100",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql performance degrading... how to diagnose the root\n cause"
},
{
"msg_contents": "On Fri, Mar 29, 2013 at 7:20 AM, Franck Routier <[email protected]>wrote:\n\n> Hi,\n>\n> I have a postgresql database (8.4) running in production whose performance\n> is degrading.\n>\n\nThere have been substantial improvements in performance monitoring in newer\nversions, so using 8.4 limits your options.\n\n\n\n> There is no single query that underperforms, all queries do.\n> Another interesting point is that a generic performance test (\n> https://launchpad.net/tpc-b) gives mediocre peformance when run on the\n> database, BUT the same test on a newly created database, on the same pg\n> cluster, on the same tablespace, does perform good.\n>\n\nIs the server still running its production workload while you do these\ntest, or are you running it on a clone or during off-peak hours? If the\nformer, then if you do your test in a clone which has no load other than\nthe benchmark, do you still see the same thing.\n\nAlso, have you tried running pgbench, which also has a tpc-b-ish workload?\n People on this list will probably be more familiar with that than with the\none you offer. What was the size of the test set (and your RAM) and the\nnumber of concurrent connections it tests?\n\nCheers,\n\nJeff\n\nOn Fri, Mar 29, 2013 at 7:20 AM, Franck Routier <[email protected]> wrote:\nHi,\n\nI have a postgresql database (8.4) running in production whose performance is degrading.There have been substantial improvements in performance monitoring in newer versions, so using 8.4 limits your options.\n \nThere is no single query that underperforms, all queries do.\nAnother interesting point is that a generic performance test (https://launchpad.net/tpc-b) gives mediocre peformance when run on the database, BUT the same test on a newly created database, on the same pg cluster, on the same tablespace, does perform good.\nIs the server still running its production workload while you do these test, or are you running it on a clone or during off-peak hours? If the former, then if you do your test in a clone which has no load other than the benchmark, do you still see the same thing.\nAlso, have you tried running pgbench, which also has a tpc-b-ish workload? People on this list will probably be more familiar with that than with the one you offer. What was the size of the test set (and your RAM) and the number of concurrent connections it tests?\n Cheers,Jeff",
"msg_date": "Fri, 29 Mar 2013 16:23:58 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql performance degrading... how to diagnose the root\n cause"
},
{
"msg_contents": "Franck Routier <franck.routier 'at' axege.com> writes:\n\n>> http://wiki.postgresql.org/wiki/Show_database_bloat\n> How do I interpret the output of this query ? Is 1.1 bloat level on a\n> table alarming, or quite ok ?\n\nI am not very used to this, but I'd start by comparing the top\nresult in your established DB against the top result in your\nfresh DB. What does it say? The wiki page says it is a loose\nestimate, however, unusually larger tbloat and/or wastedbytes\nmight be an indication.\n\nOf course, if you can afford it, a good old VACUUM FULL ANALYZE\nVERBOSE would tell you how many pages were reclaimed while\nrewriting the table. Otherwise, VACUUM VERBOSE on both the\nestablished DB and a backup/restore on a fresh DB also provide a\nhelpful comparison of how many pages are used for suspected\ntables.\n\n-- \nGuillaume Cottenceau\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 30 Mar 2013 01:02:55 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql performance degrading... how to diagnose the root\n cause"
},
{
"msg_contents": "> > I don't know that tcp-b does\n> \n> tpcb.jar is a java implementation of the http://www.tpc.org/tpcb/\n> benchmark. It is not particularly representative of my workload, but\n> gives a synthetic, db-agnostic, view of the system performance.\n> We use it to have quick view to compare differents servers (different\n> OS, different RDBMS, etc...).\n\nFor information, pgbench is a sort of limited TPC-B benchmark.\n\n> That said, the test wil create tables, load them with data, and perform\n> some transactions on them.\n> The point that makes me wonder what happens, is that the test run on my\n> main database is slow, while the same test run on a database on its own\n> is quick.\n\nDo you mean when you run it against already existing data vs its own TPC-B DB?\n\n> This is the same postgresql cluster (same postgresql.conf), same\n> tablespace (so same disks), same hardware obviously.\n> \n> Regarding the server activity, it seems quite flat : iostat shows that\n> disks are not working much (less than 5%), top shows only one active\n> core, and load average is well under 1...\n> \n> > http://wiki.postgresql.org/wiki/Show_database_bloat\n> \n> How do I interpret the output of this query ? Is 1.1 bloat level on a\n> table alarming, or quite ok ?\n\nquite ok. The threshold for maintenance task is around 20%.\nI wonder about your system catalogs (pg_type, pg_attribute, ...)\n\nYou can use low level tool provided by PostgreSQL to help figure what's going \nwrong.\npg_buffercache, pg_stattuple come first to explore your cached data and the \nblock content.\n\nOr some weird database configuration ? (parameters in PostgreSQL can be set \nper DB, per role, etc...)\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation",
"msg_date": "Sat, 30 Mar 2013 13:57:10 +0100",
"msg_from": "=?iso-8859-15?q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql performance degrading... how to diagnose the root\n cause"
},
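A hedged sketch of the low-level checks Cédric mentions, written against the 8.4 catalogs used in this thread (pgstattuple is a contrib module there, installed by running its SQL script rather than CREATE EXTENSION; the table name is a placeholder):

    -- Exact live/dead tuple percentages and free space for one table.
    SELECT * FROM pgstattuple('some_suspect_table');

    -- Per-database setting overrides that could explain why only this
    -- database misbehaves (8.4 layout; 9.0+ moved this to pg_db_role_setting).
    SELECT datname, datconfig FROM pg_database WHERE datconfig IS NOT NULL;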
{
"msg_contents": "On Fri, Mar 29, 2013 at 8:31 AM, Franck Routier <[email protected]>wrote:\n\n> Hi,\n>\n> I don't know that tcp-b does\n>>\n> tpcb.jar is a java implementation of the http://www.tpc.org/tpcb/benchmark. It is not particularly representative of my workload, but gives\n> a synthetic, db-agnostic, view of the system performance.\n> We use it to have quick view to compare differents servers (different OS,\n> different RDBMS, etc...).\n>\n\nI took a quick look at that implementation, and I can't make heads nor\ntails of it. It is just a lit of .java files. There is no documentation,\nREADME, instructions, or example usage. Am I missing something? How do I\nrun it, and tell it what scale to use and what database to connect to?\n\n\n\n> That said, the test wil create tables, load them with data, and perform\n> some transactions on them.\n> The point that makes me wonder what happens, is that the test run on my\n> main database is slow, while the same test run on a database on its own is\n> quick.\n> This is the same postgresql cluster (same postgresql.conf), same\n> tablespace (so same disks), same hardware obviously.\n>\n> Regarding the server activity, it seems quite flat : iostat shows that\n> disks are not working much (less than 5%),\n\n\nWhich column of the iostat output is that coming from?\n\n\n> top shows only one active core, and load average is well under 1...\n>\n\nCan you show the first few rows of the top output?\n\nCheers,\n\nJeff\n\nOn Fri, Mar 29, 2013 at 8:31 AM, Franck Routier <[email protected]> wrote:\nHi,\n\nI don't know that tcp-b does\n\ntpcb.jar is a java implementation of the http://www.tpc.org/tpcb/ benchmark. It is not particularly representative of my workload, but gives a synthetic, db-agnostic, view of the system performance.\n\nWe use it to have quick view to compare differents servers (different OS, different RDBMS, etc...).I took a quick look at that implementation, and I can't make heads nor tails of it. It is just a lit of .java files. There is no documentation, README, instructions, or example usage. Am I missing something? How do I run it, and tell it what scale to use and what database to connect to?\n \nThat said, the test wil create tables, load them with data, and perform some transactions on them.\nThe point that makes me wonder what happens, is that the test run on my main database is slow, while the same test run on a database on its own is quick.\nThis is the same postgresql cluster (same postgresql.conf), same tablespace (so same disks), same hardware obviously.\n\nRegarding the server activity, it seems quite flat : iostat shows that disks are not working much (less than 5%), Which column of the iostat output is that coming from? \ntop shows only one active core, and load average is well under 1...Can you show the first few rows of the top output?\nCheers,Jeff",
"msg_date": "Sat, 30 Mar 2013 12:00:58 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql performance degrading... how to diagnose the root\n cause"
},
{
"msg_contents": "Le 29/03/2013 15:20, Franck Routier a écrit :\n> Hi,\n>\n> I have a postgresql database (8.4) running in production whose \n> performance is degrading.\n> There is no single query that underperforms, all queries do.\n> Another interesting point is that a generic performance test \n> (https://launchpad.net/tpc-b) gives mediocre peformance when run on \n> the database, BUT the same test on a newly created database, on the \n> same pg cluster, on the same tablespace, does perform good.\n>\n> So the problem seems to be limited to this database, even on newly \n> created tables...\n>\n> What should I check to find the culprit of this degrading performance ?\n>\n> Franck\n>\nJust for the record, the problem turned out to be a too high \ndefault_statistics_target (4000) that was causing the planner to take up \nto seconds just to evaluate the better plan (probably in eqjoinsel() ).\nSee thread titled \"What happens between end of explain analyze and end \nof query execution ?\" on this list for details.\n\nThanks again to those who responded.\n\nFranck",
"msg_date": "Tue, 16 Apr 2013 11:16:39 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql performance degrading... how to diagnose the root\n cause"
}
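For anyone who lands here with the same symptom, a minimal sketch of confirming and undoing an oversized statistics target (the database name is a placeholder):

    SHOW default_statistics_target;

    -- For the current session, then re-gather statistics at the smaller target.
    SET default_statistics_target = 100;
    ANALYZE;

    -- Make it stick for future sessions on this database.
    ALTER DATABASE mydb SET default_statistics_target = 100;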
] |
[
{
"msg_contents": "[Apologies, I first sent this to the incorrect list, postgres-admin, in the\nevent you receive it twice]\n\nHi there,\n\nI'm hoping someone on the list can shed some light on an issue I'm having\nwith our Postgresql cluster. I'm literally tearing out my hair and don't\nhave a deep enough understanding of Postgres to find the problem.\n\nWhat's happening is I had severe disk/io issues on our original Postgres\ncluster (9.0.8) and switched to a new instance with a RAID-0 volume array.\nThe machine's CPU usage would hover around 30% and our database would run\nlightning fast with pg_locks hovering between 100-200.\n\nWithin a few seconds something would trigger a massive increase in pg_locks\nso that it suddenly shoots up to 4000-8000. At this point everything dies.\nQueries that usually take a few milliseconds takes minutes and everything\nis unresponsive until I restart postgres.\n\nThe instance still idles at this point. The only clue I could find was that\nit usually starts a few minutes after the checkpoint entries appear in my\nlogs.\n\nAny suggestions would really be appreciated. It's killing our business at\nthe moment. I can supply more info if required but pasted what I thought\nwould be useful below. Not sure what else to change in the settings.\n\nKind regards,\n\nArmand\n\n\n\nIt's on Amazon EC2 -\n* cc2.8xlarge instance type\n* 6 volumes in RAID-0 configuration. (1000 PIOPS)\n\n60.5 GiB of memory\n88 EC2 Compute Units (2 x Intel Xeon E5-2670, eight-core)\n3370 GB of instance storage\n64-bit platform\nI/O Performance: Very High (10 Gigabit Ethernet)\nEBS-Optimized Available: No**\nAPI name: cc2.8xlarge\n\n\npostgresql.conf\nfsync = off\nfull_page_writes = off\ndefault_statistics_target = 100\nmaintenance_work_mem = 1GB\ncheckpoint_completion_target = 0.9\neffective_cache_size = 48GB\nwork_mem = 64MB\nwal_buffers = -1\n checkpoint_segments = 128\nshared_buffers = 32GB\nmax_connections = 800\n effective_io_concurrency = 3 # Down from 6\n\n# - Background Writer -\n\n#bgwriter_delay = 200ms # 10-10000ms between rounds\n#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round\n#bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers\nscanned/round\n\n\n$ free\n total used free shared buffers cached\nMem: 61368192 60988180 380012 0 784 44167172\n-/+ buffers/cache: 16820224 44547968\nSwap: 0 0 0\n\n$ top -c\ntop - 21:55:51 up 12 days, 12:41, 4 users, load average: 6.03, 16.10,\n24.15\ntop - 21:55:54 up 12 days, 12:41, 4 users, load average: 6.03, 15.94,\n24.06\nTasks: 837 total, 6 running, 831 sleeping, 0 stopped, 0 zombie\nCpu(s): 15.7%us, 1.7%sy, 0.0%ni, 81.6%id, 0.3%wa, 0.0%hi, 0.6%si,\n 0.0%st\nMem: 61368192k total, 54820988k used, 6547204k free, 9032k buffer\n\n[ec2-user@ip-10-155-231-112 ~]$ sudo iostat\nLinux 3.2.39-6.88.amzn1.x86_64 () 04/01/2013 _x86_64_ (32 CPU)\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 21.00 0.00 1.10 0.26 0.00 77.63\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nxvda 0.21 5.00 2.22 5411830 2401368\nxvdk 98.32 1774.67 969.86 1919359965 1048932113\nxvdj 98.28 1773.68 969.14 1918288697 1048156776\nxvdi 98.29 1773.69 969.61 1918300250 1048662470\nxvdh 98.24 1773.92 967.54 1918544618 1046419936\nxvdg 98.27 1774.15 968.85 1918790636 1047842846\nxvdf 98.32 1775.56 968.69 1920316435 1047668172\nmd127 733.85 10645.68 5813.70 11513598393 6287682313\n\nWhat bugs me on this is the throughput percentage on the volumes in\nCloudwatch is 100% on all volumes.\n\nThe problems seem to overlap with checkpoints.\n\n2013-04-01 
21:31:35.592 UTC,,,26877,,5159fa5f.68fd,1,,2013-04-01 21:21:35\nUTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n2013-04-01 21:40:35.033 UTC,,,26877,,5159fa5f.68fd,2,,2013-04-01 21:21:35\nUTC,,0,LOG,00000,\"checkpoint complete: wrote 100635 buffers (2.4%); 0\ntransaction log file(s) added, 0 removed, 1 recycled; write=539.439 s,\nsync=0.000 s, total=539.441 s; sync files=0, longest=0.000 s, average=0.000\ns\",,,,,,,,,\"\"\n2013-04-01 21:41:35.093 UTC,,,26877,,5159fa5f.68fd,3,,2013-04-01 21:21:35\nUTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"",
"msg_date": "Tue, 2 Apr 2013 00:35:32 +0200",
"msg_from": "Armand du Plessis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problems with pg_locks explosion"
},
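A minimal sketch of what to look at inside the database while the lock count explodes, before restarting (column names as they are on 9.0, where pg_stat_activity still uses procpid and current_query):

    -- Which lock modes are piling up, and how many are still waiting to be granted.
    SELECT mode, granted, count(*)
    FROM pg_locks
    GROUP BY mode, granted
    ORDER BY count(*) DESC;

    -- The sessions that are actually blocked, and what they are running.
    SELECT procpid, waiting, current_query
    FROM pg_stat_activity
    WHERE waiting;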
{
"msg_contents": "Hello, i think that your system during the checkpoint pauses all clients in\norder to flush all data from controller's cache to the disks if i were you\ni'd try to tune my checkpoint parameters better, if that doesn't work, show\nus some vmstat output please\n\nVasilis Ventirozos\n---------- Forwarded message ----------\nFrom: \"Armand du Plessis\" <[email protected]>\nDate: Apr 2, 2013 1:37 AM\nSubject: [PERFORM] Problems with pg_locks explosion\nTo: \"pgsql-performance\" <[email protected]>\nCc:\n\n[Apologies, I first sent this to the incorrect list, postgres-admin, in the\nevent you receive it twice]\n\nHi there,\n\nI'm hoping someone on the list can shed some light on an issue I'm having\nwith our Postgresql cluster. I'm literally tearing out my hair and don't\nhave a deep enough understanding of Postgres to find the problem.\n\nWhat's happening is I had severe disk/io issues on our original Postgres\ncluster (9.0.8) and switched to a new instance with a RAID-0 volume array.\nThe machine's CPU usage would hover around 30% and our database would run\nlightning fast with pg_locks hovering between 100-200.\n\nWithin a few seconds something would trigger a massive increase in pg_locks\nso that it suddenly shoots up to 4000-8000. At this point everything dies.\nQueries that usually take a few milliseconds takes minutes and everything\nis unresponsive until I restart postgres.\n\nThe instance still idles at this point. The only clue I could find was that\nit usually starts a few minutes after the checkpoint entries appear in my\nlogs.\n\nAny suggestions would really be appreciated. It's killing our business at\nthe moment. I can supply more info if required but pasted what I thought\nwould be useful below. Not sure what else to change in the settings.\n\nKind regards,\n\nArmand\n\n\n\nIt's on Amazon EC2 -\n* cc2.8xlarge instance type\n* 6 volumes in RAID-0 configuration. 
(1000 PIOPS)\n\n60.5 GiB of memory\n88 EC2 Compute Units (2 x Intel Xeon E5-2670, eight-core)\n3370 GB of instance storage\n64-bit platform\nI/O Performance: Very High (10 Gigabit Ethernet)\nEBS-Optimized Available: No**\nAPI name: cc2.8xlarge\n\n\npostgresql.conf\nfsync = off\nfull_page_writes = off\ndefault_statistics_target = 100\nmaintenance_work_mem = 1GB\ncheckpoint_completion_target = 0.9\neffective_cache_size = 48GB\nwork_mem = 64MB\nwal_buffers = -1\n checkpoint_segments = 128\nshared_buffers = 32GB\nmax_connections = 800\n effective_io_concurrency = 3 # Down from 6\n\n# - Background Writer -\n\n#bgwriter_delay = 200ms # 10-10000ms between rounds\n#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round\n#bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers\nscanned/round\n\n\n$ free\n total used free shared buffers cached\nMem: 61368192 60988180 380012 0 784 44167172\n-/+ buffers/cache: 16820224 44547968\nSwap: 0 0 0\n\n$ top -c\ntop - 21:55:51 up 12 days, 12:41, 4 users, load average: 6.03, 16.10,\n24.15\ntop - 21:55:54 up 12 days, 12:41, 4 users, load average: 6.03, 15.94,\n24.06\nTasks: 837 total, 6 running, 831 sleeping, 0 stopped, 0 zombie\nCpu(s): 15.7%us, 1.7%sy, 0.0%ni, 81.6%id, 0.3%wa, 0.0%hi, 0.6%si,\n 0.0%st\nMem: 61368192k total, 54820988k used, 6547204k free, 9032k buffer\n\n[ec2-user@ip-10-155-231-112 ~]$ sudo iostat\nLinux 3.2.39-6.88.amzn1.x86_64 () 04/01/2013 _x86_64_ (32 CPU)\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 21.00 0.00 1.10 0.26 0.00 77.63\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nxvda 0.21 5.00 2.22 5411830 2401368\nxvdk 98.32 1774.67 969.86 1919359965 1048932113\nxvdj 98.28 1773.68 969.14 1918288697 1048156776\nxvdi 98.29 1773.69 969.61 1918300250 1048662470\nxvdh 98.24 1773.92 967.54 1918544618 1046419936\nxvdg 98.27 1774.15 968.85 1918790636 1047842846\nxvdf 98.32 1775.56 968.69 1920316435 1047668172\nmd127 733.85 10645.68 5813.70 11513598393 6287682313\n\nWhat bugs me on this is the throughput percentage on the volumes in\nCloudwatch is 100% on all volumes.\n\nThe problems seem to overlap with checkpoints.\n\n2013-04-01 21:31:35.592 UTC,,,26877,,5159fa5f.68fd,1,,2013-04-01 21:21:35\nUTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n2013-04-01 21:40:35.033 UTC,,,26877,,5159fa5f.68fd,2,,2013-04-01 21:21:35\nUTC,,0,LOG,00000,\"checkpoint complete: wrote 100635 buffers (2.4%); 0\ntransaction log file(s) added, 0 removed, 1 recycled; write=539.439 s,\nsync=0.000 s, total=539.441 s; sync files=0, longest=0.000 s, average=0.000\ns\",,,,,,,,,\"\"\n2013-04-01 21:41:35.093 UTC,,,26877,,5159fa5f.68fd,3,,2013-04-01 21:21:35\nUTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n\nHello, i think that your system during the checkpoint pauses all clients in order to flush all data from controller's cache to the disks if i were you i'd try to tune my checkpoint parameters better, if that doesn't work, show us some vmstat output please \nVasilis Ventirozos\n---------- Forwarded message ----------From: \"Armand du Plessis\" <[email protected]>Date: Apr 2, 2013 1:37 AMSubject: [PERFORM] Problems with pg_locks explosion\nTo: \"pgsql-performance\" <[email protected]>Cc: [Apologies, I first sent this to the incorrect list, postgres-admin, in the event you receive it twice]\n\nHi there,\nI'm hoping someone on the list can shed some light on an issue I'm having with our Postgresql cluster. I'm literally tearing out my hair and don't have a deep enough understanding of Postgres to find the problem. 
\nWhat's happening is I had severe disk/io issues on our original Postgres cluster (9.0.8) and switched to a new instance with a RAID-0 volume array. The machine's CPU usage would hover around 30% and our database would run lightning fast with pg_locks hovering between 100-200. \nWithin a few seconds something would trigger a massive increase in pg_locks so that it suddenly shoots up to 4000-8000. At this point everything dies. Queries that usually take a few milliseconds takes minutes and everything is unresponsive until I restart postgres. \nThe instance still idles at this point. The only clue I could find was that it usually starts a few minutes after the checkpoint entries appear in my logs. \nAny suggestions would really be appreciated. It's killing our business at the moment. I can supply more info if required but pasted what I thought would be useful below. Not sure what else to change in the settings. \n\nKind regards,Armand\nIt's on Amazon EC2 - \n* cc2.8xlarge instance type * 6 volumes in RAID-0 configuration. (1000 PIOPS) \n60.5 GiB of memory\n88 EC2 Compute Units (2 x Intel Xeon E5-2670, eight-core)3370 GB of instance storage\n64-bit platformI/O Performance: Very High (10 Gigabit Ethernet)\nEBS-Optimized Available: No**API name: cc2.8xlarge\n\npostgresql.conffsync = offfull_page_writes = off\ndefault_statistics_target = 100maintenance_work_mem = 1GBcheckpoint_completion_target = 0.9\neffective_cache_size = 48GBwork_mem = 64MBwal_buffers = -1\n\ncheckpoint_segments = 128shared_buffers = 32GBmax_connections = 800\n\neffective_io_concurrency = 3 # Down from 6\n# - Background Writer -#bgwriter_delay = 200ms # 10-10000ms between rounds\n#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round#bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers scanned/round\n$ free total used free shared buffers cached\nMem: 61368192 60988180 380012 0 784 44167172-/+ buffers/cache: 16820224 44547968\nSwap: 0 0 0$ top -c \ntop - 21:55:51 up 12 days, 12:41, 4 users, load average: 6.03, 16.10, 24.15top - 21:55:54 up 12 days, 12:41, 4 users, load average: 6.03, 15.94, 24.06\nTasks: 837 total, 6 running, 831 sleeping, 0 stopped, 0 zombieCpu(s): 15.7%us, 1.7%sy, 0.0%ni, 81.6%id, 0.3%wa, 0.0%hi, 0.6%si, 0.0%st\nMem: 61368192k total, 54820988k used, 6547204k free, 9032k buffer[ec2-user@ip-10-155-231-112 ~]$ sudo iostat\nLinux 3.2.39-6.88.amzn1.x86_64 () 04/01/2013 _x86_64_ (32 CPU)\navg-cpu: %user %nice %system %iowait %steal %idle 21.00 0.00 1.10 0.26 0.00 77.63\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtnxvda 0.21 5.00 2.22 5411830 2401368\nxvdk 98.32 1774.67 969.86 1919359965 1048932113xvdj 98.28 1773.68 969.14 1918288697 1048156776\nxvdi 98.29 1773.69 969.61 1918300250 1048662470xvdh 98.24 1773.92 967.54 1918544618 1046419936\nxvdg 98.27 1774.15 968.85 1918790636 1047842846xvdf 98.32 1775.56 968.69 1920316435 1047668172\nmd127 733.85 10645.68 5813.70 11513598393 6287682313What bugs me on this is the throughput percentage on the volumes in Cloudwatch is 100% on all volumes. \nThe problems seem to overlap with checkpoints. 
2013-04-01 21:31:35.592 UTC,,,26877,,5159fa5f.68fd,1,,2013-04-01 21:21:35 UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n2013-04-01 21:40:35.033 UTC,,,26877,,5159fa5f.68fd,2,,2013-04-01 21:21:35 UTC,,0,LOG,00000,\"checkpoint complete: wrote 100635 buffers (2.4%); 0 transaction log file(s) added, 0 removed, 1 recycled; write=539.439 s, sync=0.000 s, total=539.441 s; sync files=0, longest=0.000 s, average=0.000 s\",,,,,,,,,\"\"\n2013-04-01 21:41:35.093 UTC,,,26877,,5159fa5f.68fd,3,,2013-04-01 21:21:35 UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"",
"msg_date": "Tue, 2 Apr 2013 01:56:41 +0300",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Fwd: Problems with pg_locks explosion"
},
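A minimal sketch of checking whether checkpoints really line up with the stalls before tuning further; the counters are cumulative, so take two readings a few minutes apart and compare:

    -- checkpoints_req counts checkpoints forced by filling checkpoint_segments,
    -- checkpoints_timed the ones triggered by checkpoint_timeout.
    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_clean, buffers_backend
    FROM pg_stat_bgwriter;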
{
"msg_contents": "Thanks for the reply.\n\nI've now updated the background writer settings to:\n\n# - Background Writer -\n\nbgwriter_delay = 200ms # 10-10000ms between rounds\nbgwriter_lru_maxpages = 400 # 0-1000 max buffers written/round\nbgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers\nscanned/round\n\ncheckpoint_segments = 128\ncheckpoint_timeout = 25min\n\nIt's still happening at the moment, this time without any checkpoint\nentries in the log :(\n\nBelow the output from vmstat. I'm not sure what to look for in there?\n\nThanks again,\n\nArmand\n\n\n$ sudo vmstat 5\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu-----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n 2 0 0 485800 4224 44781700 0 0 167 91 1 0 21 1\n78 0 0\n 7 0 0 353920 4224 44836176 0 0 6320 54 21371 12921 11 2\n87 0 0\n32 0 0 352220 4232 44749544 0 0 1110 8 19414 9620 6 42\n52 0 0\n 3 0 0 363044 4232 44615772 0 0 59 1943 11185 3774 0 81\n18 0 0\n48 0 0 360076 4240 44550744 0 0 0 34 9563 5210 0 74\n26 0 0\n33 0 0 413708 4240 44438248 0 0 92 962 11250 8169 0 61\n39 0 0\n109 0 0 418080 4240 44344596 0 0 605 3490 10098 6216 1 49\n50 0 0\n58 0 0 425388 4240 44286528 0 0 5 10 10794 2470 1 91\n 8 0 0\n53 0 0 435864 4240 44243000 0 0 11 0 9755 2428 0 92\n 8 0 0\n12 0 0 440792 4248 44213164 0 0 134 5 7883 3038 0 51\n49 0 0\n 3 0 0 440360 4256 44158684 0 0 548 146 8450 3930 2 27\n70 0 0\n 2 0 0 929236 4256 44248608 0 0 10466 845 22575 14196 20 5\n74 0 0\n 4 0 0 859160 4256 44311828 0 0 7120 61 20890 12835 12 1\n86 0 0\n 4 0 0 685308 4256 44369404 0 0 6110 24 20645 12545 13 1\n85 0 0\n 4 0 0 695440 4256 44396304 0 0 5351 1208 19529 11781 11 1\n88 0 0\n 4 0 0 628276 4256 44468116 0 0 9202 0 19875 12172 9 1\n89 0 0\n 6 0 0 579716 4256 44503848 0 0 3799 22 19223 11772 10 1\n88 0 0\n 3 1 0 502948 4256 44539784 0 0 3721 6700 20620 11939 13 1\n85 0 0\n 4 0 0 414120 4256 44583456 0 0 3860 856 19801 12092 10 1\n89 0 0\n 6 0 0 349240 4256 44642880 0 0 6122 48 19834 11933 11 2\n87 0 0\n 3 0 0 400536 4256 44535872 0 0 6287 5 18945 11461 10 1\n89 0 0\n 3 0 0 364256 4256 44592412 0 0 5487 2018 20145 12344 11 1\n87 0 0\n 7 0 0 343732 4256 44598784 0 0 4209 24 19099 11482 10 1\n88 0 0\n 6 0 0 339608 4236 44576768 0 0 6805 151 18821 11333 9 2\n89 0 0\n 9 1 0 339364 4236 44556884 0 0 2597 4339 19205 11918 11 3\n85 0 0\n24 0 0 341596 4236 44480368 0 0 6165 5309 19353 11562 11 4\n84 1 0\n30 0 0 359044 4236 44416452 0 0 1364 6 12638 6138 5 28\n67 0 0\n 4 0 0 436468 4224 44326500 0 0 3704 1264 11346 7545 4 27\n68 0 0\n 3 1 0 459736 4224 44384788 0 0 6541 8 20159 12097 11 1\n88 0 0\n 8 1 0 347812 4224 44462100 0 0 12292 2860 20851 12377 9 1\n89 1 0\n 1 0 0 379752 4224 44402396 0 0 5849 147 20171 12253 11 1\n88 0 0\n 4 0 0 453692 4216 44243480 0 0 6546 269 20689 13028 12 2\n86 0 0\n 8 0 0 390160 4216 44259768 0 0 4243 0 20476 21238 6 16\n78 0 0\n 6 0 0 344504 4216 44336264 0 0 7214 2 20919 12625 11 1\n87 0 0\n 4 0 0 350128 4200 44324976 0 0 10726 2173 20417 12351 10 1\n88 0 0\n 2 1 0 362300 4200 44282484 0 0 7148 714 22469 14468 12 2\n86 0 0\n 3 0 0 366252 4184 44311680 0 0 7617 133 20487 12364 9 1\n90 0 0\n 6 0 0 368904 4184 44248152 0 0 5162 6 22910 15221 14 7\n80 0 0\n 2 0 0 383108 4184 44276780 0 0 5846 1120 21109 12563 11 1\n88 0 0\n 7 0 0 338348 4184 44274472 0 0 9270 5 21243 12698 10 1\n88 0 0\n24 0 0 339676 4184 44213036 0 0 6639 18 22976 12700 13 12\n74 0 0\n12 0 0 371848 4184 44146500 0 0 657 133 18968 7445 5 53\n43 0 0\n37 0 0 374516 4184 44076212 0 0 16 2 9156 4472 1 48\n52 0 0\n16 0 0 
398412 4184 43971060 0 0 127 0 9967 6018 0 48\n52 0 0\n 4 0 0 417312 4184 44084392 0 0 17434 1072 23661 14268 16 6\n78 1 0\n 4 0 0 407672 4184 44139896 0 0 5785 0 19779 11869 11 1\n88 0 0\n 9 0 0 349544 4184 44051596 0 0 6899 8 20376 12774 10 3\n88 0 0\n 5 0 0 424628 4184 44059628 0 0 9105 175 24546 15354 13 20\n66 1 0\n 2 0 0 377164 4184 44070564 0 0 9363 3 21191 12608 11 2\n87 0 0\n 5 0 0 353360 4184 44040804 0 0 6661 0 20931 12815 12 2\n85 0 0\n 4 0 0 355144 4180 44034620 0 0 7061 8 21264 12379 11 3\n86 0 0\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu-----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n21 0 0 358396 4180 43958420 0 0 7595 1749 23258 12299 10 27\n63 0 0\n 6 1 0 437480 4160 43922152 0 0 17565 14 17059 14928 6 18\n74 2 0\n 6 0 0 380304 4160 43993932 0 0 10120 168 21519 12798 11 2\n87 0 0\n 8 0 0 337740 4160 44007432 0 0 6033 520 20872 12461 11 1\n88 0 0\n13 0 0 349712 4132 43927784 0 0 6777 6 20919 12568 11 2\n86 0 0\n 6 1 0 351180 4112 43899756 0 0 8640 0 22543 12519 11 10\n78 0 0\n 6 0 0 356392 4112 43921532 0 0 6206 48 20383 12050 12 1\n86 0 0\n 6 0 0 355552 4108 43863448 0 0 6106 3 21244 11817 9 9\n82 0 0\n 3 0 0 364992 7312 43856824 0 0 11283 199 21296 12638 13 2\n85 0 0\n 4 1 0 371968 7120 43818552 0 0 6715 1534 22322 13305 11 7\n81 0 0\ndebug2: channel 0: window 999365 sent adjust 49211\n12 0 0 338540 7120 43822256 0 0 9142 3 21520 12194 13 5\n82 0 0\n 8 0 0 386016 7112 43717136 0 0 2123 3 20465 11466 8 20\n72 0 0\n 8 0 0 352388 7112 43715872 0 0 10366 51 25758 13879 16 19\n65 0 0\n20 0 0 351472 7112 43701060 0 0 13091 10 23766 12832 11 11\n77 1 0\n 2 0 0 386820 7112 43587520 0 0 482 210 17187 6773 3 69\n28 0 0\n64 0 0 401956 7112 43473728 0 0 0 5 10796 9487 0 55\n44 0 0\n\n\nOn Tue, Apr 2, 2013 at 12:56 AM, Vasilis Ventirozos\n<[email protected]>wrote:\n\n> Hello, i think that your system during the checkpoint pauses all clients\n> in order to flush all data from controller's cache to the disks if i were\n> you i'd try to tune my checkpoint parameters better, if that doesn't work,\n> show us some vmstat output please\n>\n> Vasilis Ventirozos\n> ---------- Forwarded message ----------\n> From: \"Armand du Plessis\" <[email protected]>\n> Date: Apr 2, 2013 1:37 AM\n> Subject: [PERFORM] Problems with pg_locks explosion\n> To: \"pgsql-performance\" <[email protected]>\n> Cc:\n>\n> [Apologies, I first sent this to the incorrect list, postgres-admin, in\n> the event you receive it twice]\n>\n> Hi there,\n>\n> I'm hoping someone on the list can shed some light on an issue I'm having\n> with our Postgresql cluster. I'm literally tearing out my hair and don't\n> have a deep enough understanding of Postgres to find the problem.\n>\n> What's happening is I had severe disk/io issues on our original Postgres\n> cluster (9.0.8) and switched to a new instance with a RAID-0 volume array.\n> The machine's CPU usage would hover around 30% and our database would run\n> lightning fast with pg_locks hovering between 100-200.\n>\n> Within a few seconds something would trigger a massive increase in\n> pg_locks so that it suddenly shoots up to 4000-8000. At this point\n> everything dies. Queries that usually take a few milliseconds takes minutes\n> and everything is unresponsive until I restart postgres.\n>\n> The instance still idles at this point. The only clue I could find was\n> that it usually starts a few minutes after the checkpoint entries appear in\n> my logs.\n>\n> Any suggestions would really be appreciated. 
It's killing our business at\n> the moment. I can supply more info if required but pasted what I thought\n> would be useful below. Not sure what else to change in the settings.\n>\n> Kind regards,\n>\n> Armand\n>\n>\n>\n> It's on Amazon EC2 -\n> * cc2.8xlarge instance type\n> * 6 volumes in RAID-0 configuration. (1000 PIOPS)\n>\n> 60.5 GiB of memory\n> 88 EC2 Compute Units (2 x Intel Xeon E5-2670, eight-core)\n> 3370 GB of instance storage\n> 64-bit platform\n> I/O Performance: Very High (10 Gigabit Ethernet)\n> EBS-Optimized Available: No**\n> API name: cc2.8xlarge\n>\n>\n> postgresql.conf\n> fsync = off\n> full_page_writes = off\n> default_statistics_target = 100\n> maintenance_work_mem = 1GB\n> checkpoint_completion_target = 0.9\n> effective_cache_size = 48GB\n> work_mem = 64MB\n> wal_buffers = -1\n> checkpoint_segments = 128\n> shared_buffers = 32GB\n> max_connections = 800\n> effective_io_concurrency = 3 # Down from 6\n>\n> # - Background Writer -\n>\n> #bgwriter_delay = 200ms # 10-10000ms between rounds\n> #bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round\n> #bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers\n> scanned/round\n>\n>\n> $ free\n> total used free shared buffers cached\n> Mem: 61368192 60988180 380012 0 784 44167172\n> -/+ buffers/cache: 16820224 44547968\n> Swap: 0 0 0\n>\n> $ top -c\n> top - 21:55:51 up 12 days, 12:41, 4 users, load average: 6.03, 16.10,\n> 24.15\n> top - 21:55:54 up 12 days, 12:41, 4 users, load average: 6.03, 15.94,\n> 24.06\n> Tasks: 837 total, 6 running, 831 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 15.7%us, 1.7%sy, 0.0%ni, 81.6%id, 0.3%wa, 0.0%hi, 0.6%si,\n> 0.0%st\n> Mem: 61368192k total, 54820988k used, 6547204k free, 9032k buffer\n>\n> [ec2-user@ip-10-155-231-112 ~]$ sudo iostat\n> Linux 3.2.39-6.88.amzn1.x86_64 () 04/01/2013 _x86_64_ (32 CPU)\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 21.00 0.00 1.10 0.26 0.00 77.63\n>\n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> xvda 0.21 5.00 2.22 5411830 2401368\n> xvdk 98.32 1774.67 969.86 1919359965 1048932113\n> xvdj 98.28 1773.68 969.14 1918288697 1048156776\n> xvdi 98.29 1773.69 969.61 1918300250 1048662470\n> xvdh 98.24 1773.92 967.54 1918544618 1046419936\n> xvdg 98.27 1774.15 968.85 1918790636 1047842846\n> xvdf 98.32 1775.56 968.69 1920316435 1047668172\n> md127 733.85 10645.68 5813.70 11513598393 6287682313\n>\n> What bugs me on this is the throughput percentage on the volumes in\n> Cloudwatch is 100% on all volumes.\n>\n> The problems seem to overlap with checkpoints.\n>\n> 2013-04-01 21:31:35.592 UTC,,,26877,,5159fa5f.68fd,1,,2013-04-01 21:21:35\n> UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n> 2013-04-01 21:40:35.033 UTC,,,26877,,5159fa5f.68fd,2,,2013-04-01 21:21:35\n> UTC,,0,LOG,00000,\"checkpoint complete: wrote 100635 buffers (2.4%); 0\n> transaction log file(s) added, 0 removed, 1 recycled; write=539.439 s,\n> sync=0.000 s, total=539.441 s; sync files=0, longest=0.000 s, average=0.000\n> s\",,,,,,,,,\"\"\n> 2013-04-01 21:41:35.093 UTC,,,26877,,5159fa5f.68fd,3,,2013-04-01 21:21:35\n> UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n>\n>\n>\n>\n>\n\nThanks for the reply. 
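I'll also keep an eye on pg_stat_bgwriter around the next spike to see whether backends are being forced to write out dirty buffers themselves rather than the bgwriter/checkpointer doing it. Something along these lines should be enough for that (illustrative only):\n\nSELECT checkpoints_timed, checkpoints_req,\n       buffers_checkpoint, buffers_clean, buffers_backend\nFROM pg_stat_bgwriter;\n",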
"msg_date": "Tue, 2 Apr 2013 01:09:42 +0200",
"msg_from": "Armand du Plessis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with pg_locks explosion"
},
{
"msg_contents": "Apologies, the checkpoint log entry was a few seconds after I sent this\nemail. Now pg_locks on 7000.\n\nAnd vmstat:\n\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu-----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n 7 0 0 351212 7032 37412872 0 0 167 91 0 1 21 1\n78 0 0\n 4 0 0 350600 7040 37374464 0 0 6077 332 12889 6634 7 3\n90 0 0\n40 0 0 346244 7040 37310604 0 0 3687 2638 16355 5517 7 31\n61 0 0\n27 0 0 385620 7040 37206560 0 0 69 1587 14483 4108 3 75\n22 0 0\n\n\nOn Tue, Apr 2, 2013 at 1:09 AM, Armand du Plessis <[email protected]> wrote:\n\n> Thanks for the reply.\n>\n> I've now updated the background writer settings to:\n>\n> # - Background Writer -\n>\n> bgwriter_delay = 200ms # 10-10000ms between rounds\n> bgwriter_lru_maxpages = 400 # 0-1000 max buffers written/round\n> bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers\n> scanned/round\n>\n> checkpoint_segments = 128\n> checkpoint_timeout = 25min\n>\n> It's still happening at the moment, this time without any checkpoint\n> entries in the log :(\n>\n> Below the output from vmstat. I'm not sure what to look for in there?\n>\n> Thanks again,\n>\n> Armand\n>\n>\n> $ sudo vmstat 5\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu-----\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa st\n> 2 0 0 485800 4224 44781700 0 0 167 91 1 0 21 1\n> 78 0 0\n> 7 0 0 353920 4224 44836176 0 0 6320 54 21371 12921 11\n> 2 87 0 0\n> 32 0 0 352220 4232 44749544 0 0 1110 8 19414 9620 6 42\n> 52 0 0\n> 3 0 0 363044 4232 44615772 0 0 59 1943 11185 3774 0 81\n> 18 0 0\n> 48 0 0 360076 4240 44550744 0 0 0 34 9563 5210 0 74\n> 26 0 0\n> 33 0 0 413708 4240 44438248 0 0 92 962 11250 8169 0 61\n> 39 0 0\n> 109 0 0 418080 4240 44344596 0 0 605 3490 10098 6216 1\n> 49 50 0 0\n> 58 0 0 425388 4240 44286528 0 0 5 10 10794 2470 1 91\n> 8 0 0\n> 53 0 0 435864 4240 44243000 0 0 11 0 9755 2428 0 92\n> 8 0 0\n> 12 0 0 440792 4248 44213164 0 0 134 5 7883 3038 0 51\n> 49 0 0\n> 3 0 0 440360 4256 44158684 0 0 548 146 8450 3930 2 27\n> 70 0 0\n> 2 0 0 929236 4256 44248608 0 0 10466 845 22575 14196 20\n> 5 74 0 0\n> 4 0 0 859160 4256 44311828 0 0 7120 61 20890 12835 12\n> 1 86 0 0\n> 4 0 0 685308 4256 44369404 0 0 6110 24 20645 12545 13\n> 1 85 0 0\n> 4 0 0 695440 4256 44396304 0 0 5351 1208 19529 11781 11\n> 1 88 0 0\n> 4 0 0 628276 4256 44468116 0 0 9202 0 19875 12172 9\n> 1 89 0 0\n> 6 0 0 579716 4256 44503848 0 0 3799 22 19223 11772 10\n> 1 88 0 0\n> 3 1 0 502948 4256 44539784 0 0 3721 6700 20620 11939 13\n> 1 85 0 0\n> 4 0 0 414120 4256 44583456 0 0 3860 856 19801 12092 10\n> 1 89 0 0\n> 6 0 0 349240 4256 44642880 0 0 6122 48 19834 11933 11\n> 2 87 0 0\n> 3 0 0 400536 4256 44535872 0 0 6287 5 18945 11461 10\n> 1 89 0 0\n> 3 0 0 364256 4256 44592412 0 0 5487 2018 20145 12344 11\n> 1 87 0 0\n> 7 0 0 343732 4256 44598784 0 0 4209 24 19099 11482 10\n> 1 88 0 0\n> 6 0 0 339608 4236 44576768 0 0 6805 151 18821 11333 9\n> 2 89 0 0\n> 9 1 0 339364 4236 44556884 0 0 2597 4339 19205 11918 11\n> 3 85 0 0\n> 24 0 0 341596 4236 44480368 0 0 6165 5309 19353 11562 11\n> 4 84 1 0\n> 30 0 0 359044 4236 44416452 0 0 1364 6 12638 6138 5 28\n> 67 0 0\n> 4 0 0 436468 4224 44326500 0 0 3704 1264 11346 7545 4 27\n> 68 0 0\n> 3 1 0 459736 4224 44384788 0 0 6541 8 20159 12097 11\n> 1 88 0 0\n> 8 1 0 347812 4224 44462100 0 0 12292 2860 20851 12377 9\n> 1 89 1 0\n> 1 0 0 379752 4224 44402396 0 0 5849 147 20171 12253 11\n> 1 88 0 0\n> 4 0 0 453692 4216 44243480 0 0 6546 269 20689 
13028 12\n> 2 86 0 0\n> 8 0 0 390160 4216 44259768 0 0 4243 0 20476 21238 6\n> 16 78 0 0\n> 6 0 0 344504 4216 44336264 0 0 7214 2 20919 12625 11\n> 1 87 0 0\n> 4 0 0 350128 4200 44324976 0 0 10726 2173 20417 12351 10\n> 1 88 0 0\n> 2 1 0 362300 4200 44282484 0 0 7148 714 22469 14468 12\n> 2 86 0 0\n> 3 0 0 366252 4184 44311680 0 0 7617 133 20487 12364 9\n> 1 90 0 0\n> 6 0 0 368904 4184 44248152 0 0 5162 6 22910 15221 14\n> 7 80 0 0\n> 2 0 0 383108 4184 44276780 0 0 5846 1120 21109 12563 11\n> 1 88 0 0\n> 7 0 0 338348 4184 44274472 0 0 9270 5 21243 12698 10\n> 1 88 0 0\n> 24 0 0 339676 4184 44213036 0 0 6639 18 22976 12700 13\n> 12 74 0 0\n> 12 0 0 371848 4184 44146500 0 0 657 133 18968 7445 5 53\n> 43 0 0\n> 37 0 0 374516 4184 44076212 0 0 16 2 9156 4472 1 48\n> 52 0 0\n> 16 0 0 398412 4184 43971060 0 0 127 0 9967 6018 0 48\n> 52 0 0\n> 4 0 0 417312 4184 44084392 0 0 17434 1072 23661 14268 16\n> 6 78 1 0\n> 4 0 0 407672 4184 44139896 0 0 5785 0 19779 11869 11\n> 1 88 0 0\n> 9 0 0 349544 4184 44051596 0 0 6899 8 20376 12774 10\n> 3 88 0 0\n> 5 0 0 424628 4184 44059628 0 0 9105 175 24546 15354 13\n> 20 66 1 0\n> 2 0 0 377164 4184 44070564 0 0 9363 3 21191 12608 11\n> 2 87 0 0\n> 5 0 0 353360 4184 44040804 0 0 6661 0 20931 12815 12\n> 2 85 0 0\n> 4 0 0 355144 4180 44034620 0 0 7061 8 21264 12379 11\n> 3 86 0 0\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> -----cpu-----\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa st\n> 21 0 0 358396 4180 43958420 0 0 7595 1749 23258 12299 10\n> 27 63 0 0\n> 6 1 0 437480 4160 43922152 0 0 17565 14 17059 14928 6\n> 18 74 2 0\n> 6 0 0 380304 4160 43993932 0 0 10120 168 21519 12798 11\n> 2 87 0 0\n> 8 0 0 337740 4160 44007432 0 0 6033 520 20872 12461 11\n> 1 88 0 0\n> 13 0 0 349712 4132 43927784 0 0 6777 6 20919 12568 11\n> 2 86 0 0\n> 6 1 0 351180 4112 43899756 0 0 8640 0 22543 12519 11\n> 10 78 0 0\n> 6 0 0 356392 4112 43921532 0 0 6206 48 20383 12050 12\n> 1 86 0 0\n> 6 0 0 355552 4108 43863448 0 0 6106 3 21244 11817 9\n> 9 82 0 0\n> 3 0 0 364992 7312 43856824 0 0 11283 199 21296 12638 13\n> 2 85 0 0\n> 4 1 0 371968 7120 43818552 0 0 6715 1534 22322 13305 11\n> 7 81 0 0\n> debug2: channel 0: window 999365 sent adjust 49211\n> 12 0 0 338540 7120 43822256 0 0 9142 3 21520 12194 13\n> 5 82 0 0\n> 8 0 0 386016 7112 43717136 0 0 2123 3 20465 11466 8\n> 20 72 0 0\n> 8 0 0 352388 7112 43715872 0 0 10366 51 25758 13879 16\n> 19 65 0 0\n> 20 0 0 351472 7112 43701060 0 0 13091 10 23766 12832 11\n> 11 77 1 0\n> 2 0 0 386820 7112 43587520 0 0 482 210 17187 6773 3 69\n> 28 0 0\n> 64 0 0 401956 7112 43473728 0 0 0 5 10796 9487 0 55\n> 44 0 0\n>\n>\n> On Tue, Apr 2, 2013 at 12:56 AM, Vasilis Ventirozos <\n> [email protected]> wrote:\n>\n>> Hello, i think that your system during the checkpoint pauses all clients\n>> in order to flush all data from controller's cache to the disks if i were\n>> you i'd try to tune my checkpoint parameters better, if that doesn't work,\n>> show us some vmstat output please\n>>\n>> Vasilis Ventirozos\n>> ---------- Forwarded message ----------\n>> From: \"Armand du Plessis\" <[email protected]>\n>> Date: Apr 2, 2013 1:37 AM\n>> Subject: [PERFORM] Problems with pg_locks explosion\n>> To: \"pgsql-performance\" <[email protected]>\n>> Cc:\n>>\n>> [Apologies, I first sent this to the incorrect list, postgres-admin, in\n>> the event you receive it twice]\n>>\n>> Hi there,\n>>\n>> I'm hoping someone on the list can shed some light on an issue I'm having\n>> with our Postgresql cluster. 
I'm literally tearing out my hair and don't\n>> have a deep enough understanding of Postgres to find the problem.\n>>\n>> What's happening is I had severe disk/io issues on our original Postgres\n>> cluster (9.0.8) and switched to a new instance with a RAID-0 volume array.\n>> The machine's CPU usage would hover around 30% and our database would run\n>> lightning fast with pg_locks hovering between 100-200.\n>>\n>> Within a few seconds something would trigger a massive increase in\n>> pg_locks so that it suddenly shoots up to 4000-8000. At this point\n>> everything dies. Queries that usually take a few milliseconds takes minutes\n>> and everything is unresponsive until I restart postgres.\n>>\n>> The instance still idles at this point. The only clue I could find was\n>> that it usually starts a few minutes after the checkpoint entries appear in\n>> my logs.\n>>\n>> Any suggestions would really be appreciated. It's killing our business at\n>> the moment. I can supply more info if required but pasted what I thought\n>> would be useful below. Not sure what else to change in the settings.\n>>\n>> Kind regards,\n>>\n>> Armand\n>>\n>>\n>>\n>> It's on Amazon EC2 -\n>> * cc2.8xlarge instance type\n>> * 6 volumes in RAID-0 configuration. (1000 PIOPS)\n>>\n>> 60.5 GiB of memory\n>> 88 EC2 Compute Units (2 x Intel Xeon E5-2670, eight-core)\n>> 3370 GB of instance storage\n>> 64-bit platform\n>> I/O Performance: Very High (10 Gigabit Ethernet)\n>> EBS-Optimized Available: No**\n>> API name: cc2.8xlarge\n>>\n>>\n>> postgresql.conf\n>> fsync = off\n>> full_page_writes = off\n>> default_statistics_target = 100\n>> maintenance_work_mem = 1GB\n>> checkpoint_completion_target = 0.9\n>> effective_cache_size = 48GB\n>> work_mem = 64MB\n>> wal_buffers = -1\n>> checkpoint_segments = 128\n>> shared_buffers = 32GB\n>> max_connections = 800\n>> effective_io_concurrency = 3 # Down from 6\n>>\n>> # - Background Writer -\n>>\n>> #bgwriter_delay = 200ms # 10-10000ms between rounds\n>> #bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round\n>> #bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers\n>> scanned/round\n>>\n>>\n>> $ free\n>> total used free shared buffers cached\n>> Mem: 61368192 60988180 380012 0 784 44167172\n>> -/+ buffers/cache: 16820224 44547968\n>> Swap: 0 0 0\n>>\n>> $ top -c\n>> top - 21:55:51 up 12 days, 12:41, 4 users, load average: 6.03, 16.10,\n>> 24.15\n>> top - 21:55:54 up 12 days, 12:41, 4 users, load average: 6.03, 15.94,\n>> 24.06\n>> Tasks: 837 total, 6 running, 831 sleeping, 0 stopped, 0 zombie\n>> Cpu(s): 15.7%us, 1.7%sy, 0.0%ni, 81.6%id, 0.3%wa, 0.0%hi, 0.6%si,\n>> 0.0%st\n>> Mem: 61368192k total, 54820988k used, 6547204k free, 9032k buffer\n>>\n>> [ec2-user@ip-10-155-231-112 ~]$ sudo iostat\n>> Linux 3.2.39-6.88.amzn1.x86_64 () 04/01/2013 _x86_64_ (32 CPU)\n>>\n>> avg-cpu: %user %nice %system %iowait %steal %idle\n>> 21.00 0.00 1.10 0.26 0.00 77.63\n>>\n>> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n>> xvda 0.21 5.00 2.22 5411830 2401368\n>> xvdk 98.32 1774.67 969.86 1919359965 1048932113\n>> xvdj 98.28 1773.68 969.14 1918288697 1048156776\n>> xvdi 98.29 1773.69 969.61 1918300250 1048662470\n>> xvdh 98.24 1773.92 967.54 1918544618 1046419936\n>> xvdg 98.27 1774.15 968.85 1918790636 1047842846\n>> xvdf 98.32 1775.56 968.69 1920316435 1047668172\n>> md127 733.85 10645.68 5813.70 11513598393 6287682313\n>>\n>> What bugs me on this is the throughput percentage on the volumes in\n>> Cloudwatch is 100% on all volumes.\n>>\n>> The problems seem to overlap with 
checkpoints.\n>>\n>> 2013-04-01 21:31:35.592 UTC,,,26877,,5159fa5f.68fd,1,,2013-04-01 21:21:35\n>> UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n>> 2013-04-01 21:40:35.033 UTC,,,26877,,5159fa5f.68fd,2,,2013-04-01 21:21:35\n>> UTC,,0,LOG,00000,\"checkpoint complete: wrote 100635 buffers (2.4%); 0\n>> transaction log file(s) added, 0 removed, 1 recycled; write=539.439 s,\n>> sync=0.000 s, total=539.441 s; sync files=0, longest=0.000 s, average=0.000\n>> s\",,,,,,,,,\"\"\n>> 2013-04-01 21:41:35.093 UTC,,,26877,,5159fa5f.68fd,3,,2013-04-01 21:21:35\n>> UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n>>\n>>\n>>\n>>\n>>\n>\n\n
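Next time it spikes I'll also check how many of those locks are actually waiting rather than granted, and how many backends report themselves as waiting, with something like this (just a quick sanity check on 9.2):\n\nSELECT granted, count(*) FROM pg_locks GROUP BY granted;\nSELECT waiting, count(*) FROM pg_stat_activity GROUP BY waiting;\n",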
"msg_date": "Tue, 2 Apr 2013 01:15:01 +0200",
"msg_from": "Armand du Plessis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with pg_locks explosion"
},
{
"msg_contents": "Try these SQL statements, they will give you more information about what's\nhappening in your server lock-wise\n\nSELECT\nlocktype, virtualtransaction,transactionid,nspname,relname,mode,granted,\ncast(date_trunc('second',query_start) AS timestamp) AS query_start,\nsubstr(current_query,1,25) AS query\nFROM pg_locks\nLEFT OUTER JOIN pg_class ON (pg_locks.relation = pg_class.oid)\nLEFT OUTER JOIN pg_namespace ON (pg_namespace.oid = pg_class.\nrelnamespace), pg_stat_activity\nWHERE\nNOT pg_locks.pid=pg_backend_pid() AND pg_locks.pid=pg_stat_activity.procpid;\n\n\nSELECT\nlocked.pid AS locked_pid, locker.pid AS locker_pid, locked_act.usename AS\nlocked_user, locker_act.usename AS locker_user,\nlocked.virtualtransaction, locked.transactionid, locked.locktype\nFROM\npg_locks locked, pg_locks locker, pg_stat_activity locked_act,\npg_stat_activity locker_act\nWHERE\nlocker.granted=true AND locked.granted=false AND\nlocked.pid=locked_act.procpid AND\nlocker.pid=locker_act.procpid AND\n(locked.virtualtransaction=locker.virtualtransaction OR\nlocked.transactionid=locker.transactionid);\n\nSELECT\nlocked.pid AS locked_pid, locker.pid AS locker_pid, locked_act.usename AS\nlocked_user, locker_act.usename AS locker_user,\nlocked.virtualtransaction, locked.transactionid, relname\nFROM\npg_locks locked\nLEFT OUTER JOIN pg_class ON (locked.relation = pg_class.oid), pg_locks\nlocker,pg_stat_activity locked_act, pg_stat_activity locker_act\nWHERE\nlocker.granted=true AND locked.granted=false AND\nlocked.pid=locked_act.procpid AND locker.pid=locker_act.procpid AND\nlocked.relation=locker.relation;\n\nVasilis Ventirozos\n\nP.S. These reference the pre-9.2 pg_stat_activity columns; if the new instance is\non 9.2 or later, procpid is now called pid and current_query is now query, so\nadjust those column names before running them.\n",
"msg_date": "Tue, 2 Apr 2013 02:43:11 +0300",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with pg_locks explosion"
},
{
"msg_contents": "In addition to tuning the various Postgres config knobs you may need to \nlook at how your AWS server is set up. If your load is causing an IO \nstall then *symptoms* of this will be lots of locks...\n\nYou have quite a lot of memory (60G), so look at tuning the \nvm.dirty_background_ratio, vm.dirty_ratio sysctls to avoid trying to \n*suddenly* write out many gigs of dirty buffers.\n\nYour provisioned volumes are much better than the default AWS ones, but \nare still not hugely fast (i.e. 1000 IOPS is about 8 MB/s worth of \nPostgres 8k buffers). So you may need to look at adding more volumes \ninto the array, or adding some separate ones and putting pg_xlog \ndirectory on 'em.\n\nHowever before making changes I would recommend using iostat or sar to \nmonitor how volumes are handling the load (I usually choose a 1 sec \ngranularity and look for 100% util and high - several hundred ms - \nawaits). Also iotop could be enlightening.\n\nRegards\n\nMark\n\nOn 02/04/13 11:35, Armand du Plessis wrote:\n>\n> It's on Amazon EC2 -\n> * cc2.8xlarge instance type\n> * 6 volumes in RAID-0 configuration. (1000 PIOPS)\n>\n> 60.5 GiB of memory\n> 88 EC2 Compute Units (2 x Intel Xeon E5-2670, eight-core)\n> 3370 GB of instance storage\n> 64-bit platform\n> I/O Performance: Very High (10 Gigabit Ethernet)\n> EBS-Optimized Available: No**\n> API name: cc2.8xlarge\n>\n",
"msg_date": "Tue, 02 Apr 2013 13:11:15 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with pg_locks explosion"
},
{
"msg_contents": "On Mon, Apr 1, 2013 at 3:35 PM, Armand du Plessis <[email protected]> wrote:\n\n> [Apologies, I first sent this to the incorrect list, postgres-admin, in\n> the event you receive it twice]\n>\n> Hi there,\n>\n> I'm hoping someone on the list can shed some light on an issue I'm having\n> with our Postgresql cluster. I'm literally tearing out my hair and don't\n> have a deep enough understanding of Postgres to find the problem.\n>\n> What's happening is I had severe disk/io issues on our original Postgres\n> cluster (9.0.8)\n>\nand switched to a new instance with a RAID-0 volume array.\n>\n\nWhat was the old instance IO? Did you do IO benchmarking on both?\n\n\n> The machine's CPU usage would hover around 30% and our database would run\n> lightning fast with pg_locks hovering between 100-200.\n>\n> Within a few seconds something would trigger a massive increase in\n> pg_locks so that it suddenly shoots up to 4000-8000. At this point\n> everything dies. Queries that usually take a few milliseconds takes minutes\n> and everything is unresponsive until I restart postgres.\n>\n\nI think that pg_locks is pretty much a red herring. All it means is that\nyou have a lot more active connections than you used to. All active\nconnections are going to hold various locks, while most idle connections\n(other than 'idle in transaction') connections will not hold any.\n\nAlthough I doubt it will solve this particular problem, you should probably\nuse a connection pooler.\n\n\n\n> shared_buffers = 32GB\n>\n\nThat seems very high. There are reports that using >8 GB leads to\nprecisely the type of problem you are seeing (checkpoint associated\nfreezes). Although I've never seen those reports when fsync=off.\n\nI thought you might be suffering from the problem solved in release 9.1 by\nitem \"Merge duplicate fsync requests (Robert Haas, Greg Smith)\", but then I\nrealized that with fsync=off it could not be that.\n\n\n\n>\n> max_connections = 800\n>\n\nThat also is very high.\n\n\n> The problems seem to overlap with checkpoints.\n>\n> 2013-04-01 21:31:35.592 UTC,,,26877,,5159fa5f.68fd,1,,2013-04-01 21:21:35\n> UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n> 2013-04-01 21:40:35.033 UTC,,,26877,,5159fa5f.68fd,2,,2013-04-01 21:21:35\n> UTC,,0,LOG,00000,\"checkpoint complete: wrote 100635 buffers (2.4%); 0\n> transaction log file(s) added, 0 removed, 1 recycled; write=539.439 s,\n> sync=0.000 s, total=539.441 s; sync files=0, longest=0.000 s, average=0.000\n> s\",,,,,,,,,\"\"\n> 2013-04-01 21:41:35.093 UTC,,,26877,,5159fa5f.68fd,3,,2013-04-01 21:21:35\n> UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n>\n\n\nI think you changed checkpoint_timout from default (5 min) to 10 minutes,\nwithout telling us. Anyway, this is where it would be nice to know how\nmuch of the 539.439 s in the write phase was spent blocking on writes, and\nhow much was spent napping. But that info is not collected by pgsql.\n\nYour top output looked for it was a time at which there were no problems,\nand it didn't include the top processes, so it wasn't very informative.\n\nIf you could upgrade to 9.2 and capture some data with track_io_timing,\nthat could be useful.\n\nCheers,\n\nJeff\n\nOn Mon, Apr 1, 2013 at 3:35 PM, Armand du Plessis <[email protected]> wrote:\n[Apologies, I first sent this to the incorrect list, postgres-admin, in the event you receive it twice]\nHi there,\nI'm hoping someone on the list can shed some light on an issue I'm having with our Postgresql cluster. 
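\n\nP.S. To make the track_io_timing suggestion concrete: on 9.2 a superuser session can simply switch it on and EXPLAIN will then include \"I/O Timings\" lines, along the lines of (illustrative only -- substitute one of your slow queries for the sample one):\n\nSET track_io_timing = on;\nEXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pg_class;\n",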
"msg_date": "Mon, 1 Apr 2013 17:21:16 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with pg_locks explosion"
},
{
"msg_contents": "Thanks Mark,\n\nI had a look at the iostat output (on a 5s interval) and pasted it below.\nThe utilization and waits seems low. Included a sample below, #1 taken\nduring normal operation and then when the locks happen it basically drops\nto 0 across the board. My (mis)understanding of the IOPS was that it would\nbe 1000 IOPS per/volume and when in RAID0 should give me quite a bit higher\nthroughput than in a single EBS volume setup. (My naive envelop calculation\nwas #volumes * PIOPS = Effective IOPS :/)\n\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\navgqu-sz await svctm %util\nxvda 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n 0.00 0.00 0.00 0.00\nxvdk 0.00 0.00 141.60 0.00 5084.80 0.00 35.91\n 0.43 3.06 0.51 7.28\nxvdj 0.00 0.00 140.40 0.40 4614.40 24.00 32.94\n 0.49 3.45 0.52 7.28\nxvdi 0.00 0.00 123.00 2.00 4019.20 163.20 33.46\n 0.33 2.63 0.68 8.48\nxvdh 0.00 0.00 139.80 0.80 4787.20 67.20 34.53\n 0.52 3.73 0.55 7.68\nxvdg 0.00 0.00 143.80 0.20 4804.80 16.00 33.48\n 0.86 6.03 0.72 10.40\nxvdf 0.00 0.00 146.40 0.00 4758.40 0.00 32.50\n 0.55 3.76 0.55 8.00\nmd127 0.00 0.00 831.20 3.40 27867.20 270.40 33.71\n 0.00 0.00 0.00 0.00\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 0.00 0.00 100.00 0.00 0.00 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\navgqu-sz await svctm %util\nxvda 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n 0.00 0.00 0.00 0.00\nxvdk 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n 0.00 0.00 0.00 0.00\nxvdj 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n 0.00 0.00 0.00 0.00\nxvdi 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n 0.00 0.00 0.00 0.00\nxvdh 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n 0.00 0.00 0.00 0.00\nxvdg 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n 0.00 0.00 0.00 0.00\nxvdf 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n 0.00 0.00 0.00 0.00\nmd127 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n 0.00 0.00 0.00 0.00\n\nIt only spikes to 100% util when the server restarts. What bugs me though\nis Cloud Metrics show 100% Throughput on all the volumes despite the output\nabove.\n\nI'm looking into vm.dirty_background_ratio, vm.dirty_ratio sysctls. Is\nthere any guidance or links available that would be useful as a starting\npoint?\n\nThanks again for the help, I really appreciate it.\n\nRegards,\n\nArmand\n\nOn Tue, Apr 2, 2013 at 2:11 AM, Mark Kirkwood <[email protected]\n> wrote:\n\n> In addition to tuning the various Postgres config knobs you may need to\n> look at how your AWS server is set up. If your load is causing an IO stall\n> then *symptoms* of this will be lots of locks...\n>\n> You have quite a lot of memory (60G), so look at tuning the\n> vm.dirty_background_ratio, vm.dirty_ratio sysctls to avoid trying to\n> *suddenly* write out many gigs of dirty buffers.\n>\n> Your provisioned volumes are much better than the default AWS ones, but\n> are still not hugely fast (i.e 1000 IOPS is about 8 MB/s worth of Postgres\n> 8k buffers). So you may need to look at adding more volumes into the array,\n> or adding some separate ones and putting pg_xlog directory on 'em.\n>\n> However before making changes I would recommend using iostat or sar to\n> monitor how volumes are handling the load (I usually choose a 1 sec\n> granularity and look for 100% util and high - server hundred ms - awaits).\n> Also iotop could be enlightening.\n>\n> Regards\n>\n> Mark\n\nThanks Mark, I had a look at the iostat output (on a 5s interval) and pasted it below. The utilization and waits seems low. 
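\n\nAs a starting point I'm planning to experiment with something like the following (the numbers are only a guess to start from, and I'll measure with iostat/sar before and after):\n\n# /etc/sysctl.conf\nvm.dirty_background_ratio = 5\nvm.dirty_ratio = 10\n\n$ sudo sysctl -p\n",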
"msg_date": "Tue, 2 Apr 2013 02:31:35 +0200",
"msg_from": "Armand du Plessis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Problems with pg_locks explosion"
},
{
"msg_contents": "Hi Jeff,\n\nSorry I should've mentioned the new instance is Postgres 9.2.3. The old\ninstance IO maxed out the disk/io available on a single EBS volume on AWS.\nIt had 2000 PIOPS but was constantly bottlenecked. I assumed that striping\n6 1000 IOPS volumes in RAID-0 would give me some breathing space on that\nfront, and looking at the iostat (just included in previous email) it seems\nto be doing OK.\n\nI actually had pg_pool running as a test but to avoid having too many\nmoving parts in the change removed it from the equation. Need to look into\nthe proper configuration so it doesn't saturate my cluster worse than I'm\ndoing myself.\n\nI've commented inline.\n\nRegards,\n\nArmand\n\nPS. This is probably the most helpful mailing list I've ever come across.\nStarting to feel a little more that it can be solved.\n\n\nOn Tue, Apr 2, 2013 at 2:21 AM, Jeff Janes <[email protected]> wrote:\n\n> On Mon, Apr 1, 2013 at 3:35 PM, Armand du Plessis <[email protected]> wrote:\n>\n>> [Apologies, I first sent this to the incorrect list, postgres-admin, in\n>> the event you receive it twice]\n>>\n>> Hi there,\n>>\n>> I'm hoping someone on the list can shed some light on an issue I'm having\n>> with our Postgresql cluster. I'm literally tearing out my hair and don't\n>> have a deep enough understanding of Postgres to find the problem.\n>>\n>> What's happening is I had severe disk/io issues on our original Postgres\n>> cluster (9.0.8)\n>>\n> and switched to a new instance with a RAID-0 volume array.\n>>\n>\n> What was the old instance IO? Did you do IO benchmarking on both?\n>\n>\n>> The machine's CPU usage would hover around 30% and our database would\n>> run lightning fast with pg_locks hovering between 100-200.\n>>\n>> Within a few seconds something would trigger a massive increase in\n>> pg_locks so that it suddenly shoots up to 4000-8000. At this point\n>> everything dies. Queries that usually take a few milliseconds takes minutes\n>> and everything is unresponsive until I restart postgres.\n>>\n>\n> I think that pg_locks is pretty much a red herring. All it means is that\n> you have a lot more active connections than you used to. All active\n> connections are going to hold various locks, while most idle connections\n> (other than 'idle in transaction') connections will not hold any.\n>\n> Although I doubt it will solve this particular problem, you should\n> probably use a connection pooler.\n>\n>\n>\n>> shared_buffers = 32GB\n>>\n>\n> That seems very high. There are reports that using >8 GB leads to\n> precisely the type of problem you are seeing (checkpoint associated\n> freezes). 
Although I've never seen those reports when fsync=off.\n>\n> I thought you might be suffering from the problem solved in release 9.1 by\n> item \"Merge duplicate fsync requests (Robert Haas, Greg Smith)\", but then I\n> realized that with fsync=off it could not be that.\n>\n>\n>\n>>\n>> max_connections = 800\n>>\n>\n> That also is very high.\n>\n>\n>> The problems seem to overlap with checkpoints.\n>>\n>> 2013-04-01 21:31:35.592 UTC,,,26877,,5159fa5f.68fd,1,,2013-04-01 21:21:35\n>> UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n>> 2013-04-01 21:40:35.033 UTC,,,26877,,5159fa5f.68fd,2,,2013-04-01 21:21:35\n>> UTC,,0,LOG,00000,\"checkpoint complete: wrote 100635 buffers (2.4%); 0\n>> transaction log file(s) added, 0 removed, 1 recycled; write=539.439 s,\n>> sync=0.000 s, total=539.441 s; sync files=0, longest=0.000 s, average=0.000\n>> s\",,,,,,,,,\"\"\n>> 2013-04-01 21:41:35.093 UTC,,,26877,,5159fa5f.68fd,3,,2013-04-01 21:21:35\n>> UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n>>\n>\n>\n> I think you changed checkpoint_timout from default (5 min) to 10 minutes,\n> without telling us. Anyway, this is where it would be nice to know how\n> much of the 539.439 s in the write phase was spent blocking on writes, and\n> how much was spent napping. But that info is not collected by pgsql.\n>\n\nI did actually change it to 25 minutes. Apologies it was probably lost in\nthe text of a previous email. Here's the changed settings:\n\n# - Background Writer -\n\nbgwriter_delay = 200ms # 10-10000ms between rounds\nbgwriter_lru_maxpages = 400 # 0-1000 max buffers written/round\nbgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers\nscanned/round\n\ncheckpoint_segments = 128\ncheckpoint_timeout = 25min\n\nIt seems to be lasting longer with these settings.\n\n\n>\n> Your top output looked for it was a time at which there were no problems,\n> and it didn't include the top processes, so it wasn't very informative.\n>\n> If you could upgrade to 9.2 and capture some data with track_io_timing,\n> that could be useful.\n>\n\nI'm looking into track_io_timing.\n\n\n>\n> Cheers,\n>\n> Jeff\n>\n\nHi Jeff, Sorry I should've mentioned the new instance is Postgres 9.2.3. The old instance IO maxed out the disk/io available on a single EBS volume on AWS. It had 2000 PIOPS but was constantly bottlenecked. I assumed that striping 6 1000 IOPS volumes in RAID-0 would give me some breathing space on that front, and looking at the iostat (just included in previous email) it seems to be doing OK. \nI actually had pg_pool running as a test but to avoid having too many moving parts in the change removed it from the equation. Need to look into the proper configuration so it doesn't saturate my cluster worse than I'm doing myself.\nI've commented inline. Regards,ArmandPS. This is probably the most helpful mailing list I've ever come across. Starting to feel a little more that it can be solved. \nOn Tue, Apr 2, 2013 at 2:21 AM, Jeff Janes <[email protected]> wrote:\nOn Mon, Apr 1, 2013 at 3:35 PM, Armand du Plessis <[email protected]> wrote:\n\n[Apologies, I first sent this to the incorrect list, postgres-admin, in the event you receive it twice]\nHi there,\nI'm hoping someone on the list can shed some light on an issue I'm having with our Postgresql cluster. I'm literally tearing out my hair and don't have a deep enough understanding of Postgres to find the problem. 
",
"msg_date": "Tue, 2 Apr 2013 02:40:41 +0200",
"msg_from": "Armand du Plessis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with pg_locks explosion"
},
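The shared_buffers, max_connections and checkpoint settings debated above all live in postgresql.conf. As a rough sketch only, the direction of the advice in this exchange might translate into something like the lines below; the concrete values are illustrative assumptions, not figures agreed anywhere in the thread:

# postgresql.conf sketch reflecting the advice above (values are assumptions)
shared_buffers = 8GB                  # reports cited above suggest >8GB can worsen checkpoint freezes
max_connections = 200                 # with a pooler in front, instead of 800 direct connections
checkpoint_segments = 128
checkpoint_timeout = 25min
checkpoint_completion_target = 0.9    # spread checkpoint writes over more of the interval
log_checkpoints = on                  # log write/sync phase timings for each checkpoint

A reload (pg_ctl reload, or SELECT pg_reload_conf()) is enough for the checkpoint and logging settings; shared_buffers and max_connections need a restart.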
{
"msg_contents": "I've run an EXPLAIN ANALYZE on one of the queries that appeared in the\npg_locks (although like you say that might be a red herring) both during\nnormal response times (2) and also after the locks backlog materialized (1)\n\nThe output below, I've just blanked out some columns. The IO timings do\nseem an order of magnitude slower but not excessive unless I'm reading it\nwrong.\n\n\"Limit (cost=2364.19..2365.44 rows=500 width=177) (actual\ntime=6501.103..6507.196 rows=500 loops=1)\"\n\" Output:\n\" Buffers: shared hit=7163 read=137\"\n\" I/O Timings: read=107.771\"\n\" -> Sort (cost=2364.19..2365.56 rows=549 width=177) (actual\ntime=6501.095..6503.216 rows=500 loops=1)\"\n\" Output:\n\" Sort Key: messages.created_at\"\n\" Sort Method: quicksort Memory: 294kB\"\n\" Buffers: shared hit=7163 read=137\"\n\" I/O Timings: read=107.771\"\n\" -> Nested Loop (cost=181.19..2339.21 rows=549 width=177) (actual\ntime=6344.410..6495.377 rows=783 loops=1)\"\n\" Output:\n\" Buffers: shared hit=7160 read=137\"\n\" I/O Timings: read=107.771\"\n\" -> Nested Loop (cost=181.19..1568.99 rows=549 width=177)\n(actual time=6344.389..6470.549 rows=783 loops=1)\"\n\" Output:\n\" Buffers: shared hit=3931 read=137\"\n\" I/O Timings: read=107.771\"\n\" -> Bitmap Heap Scan on public.messages\n (cost=181.19..798.78 rows=549 width=177) (actual time=6344.342..6436.117\nrows=783 loops=1)\"\n\" Output:\n\" Recheck Cond:\n\" Buffers: shared hit=707 read=137\"\n\" I/O Timings: read=107.771\"\n\" -> BitmapOr (cost=181.19..181.19 rows=549\nwidth=0) (actual time=6344.226..6344.226 rows=0 loops=1)\"\n\" Buffers: shared hit=120 read=20\"\n\" I/O Timings: read=37.085\"\n\" -> Bitmap Index Scan on\nmessages_sender_type_created_at_idx (cost=0.00..23.41 rows=309 width=0)\n(actual time=6343.358..6343.358 rows=366 loops=1)\"\n\" Index Cond:\n\" Buffers: shared hit=26 read=15\"\n\" I/O Timings: read=36.977\"\n\" -> Bitmap Index Scan on\nmessages_type_sender_recipient_created_at\n\" Buffers: shared hit=94 read=5\"\n\" I/O Timings: read=0.108\"\n\" -> Index Only Scan using profiles_pkey on\npublic.profiles (cost=0.00..1.39 rows=1 width=4) (actual time=0.018..0.024\nrows=1 loops=783)\"\n\" Output: profiles.id\"\n\" Index Cond: (profiles.id = messages.sender)\"\n\" Heap Fetches: 661\"\n\" Buffers: shared hit=3224\"\n\" -> Index Only Scan using profiles_pkey on public.profiles\nrecipient_profiles_messages (cost=0.00..1.39 rows=1 width=4) (actual\ntime=0.014..0.018 rows=1 loops=783)\"\n\" Output: recipient_profiles_messages.id\"\n\" Index Cond: (recipient_profiles_messages.id =\nmessages.recipient)\"\n\" Heap Fetches: 667\"\n\" Buffers: shared hit=3229\"\n\"Total runtime: 6509.328 ms\"\n\n\n\n\"Limit (cost=2366.57..2367.82 rows=500 width=177) (actual\ntime=73.284..76.296 rows=500 loops=1)\"\n\" Output: various columns\"\n\" Buffers: shared hit=6738 read=562\"\n\" I/O Timings: read=19.212\"\n\" -> Sort (cost=2366.57..2367.94 rows=549 width=177) (actual\ntime=73.276..74.300 rows=500 loops=1)\"\n\" Output: various columns\"\n\" Sort Key: messages.created_at\"\n\" Sort Method: quicksort Memory: 294kB\"\n\" Buffers: shared hit=6738 read=562\"\n\" I/O Timings: read=19.212\"\n\" -> Nested Loop (cost=181.19..2341.59 rows=549 width=177) (actual\ntime=3.556..69.866 rows=783 loops=1)\"\n\" Output: various columns\n\" Buffers: shared hit=6735 read=562\"\n\" I/O Timings: read=19.212\"\n\" -> Nested Loop (cost=181.19..1570.19 rows=549 width=177)\n(actual time=3.497..53.820 rows=783 loops=1)\"\n\" Output: various columns\n\" Buffers: shared 
hit=3506 read=562\"\n\" I/O Timings: read=19.212\"\n\" -> Bitmap Heap Scan on public.messages\n (cost=181.19..798.78 rows=549 width=177) (actual time=3.408..32.906\nrows=783 loops=1)\"\n\" Output: various columns\n\" Recheck Cond: ()\n\" Buffers: shared hit=282 read=562\"\n\" I/O Timings: read=19.212\"\n\" -> BitmapOr (cost=181.19..181.19 rows=549\nwidth=0) (actual time=3.279..3.279 rows=0 loops=1)\"\n\" Buffers: shared hit=114 read=26\"\n\" I/O Timings: read=1.755\"\n\" -> Bitmap Index Scan on\nmessages_sender_type_created_at_idx (cost=0.00..23.41 rows=309 width=0)\n(actual time=1.882..1.882 rows=366 loops=1)\"\n\" Index Cond:\n\" Buffers: shared hit=25 read=16\"\n\" I/O Timings: read=1.085\"\n\" -> Bitmap Index Scan on\n\" Buffers: shared hit=89 read=10\"\n\" I/O Timings: read=0.670\"\n\" -> Index Only Scan using profiles_pkey on\npublic.profiles (cost=0.00..1.40 rows=1 width=4) (actual time=0.012..0.015\nrows=1 loops=783)\"\n\" Output: profiles.id\"\n\" Index Cond: (profiles.id = messages.sender)\"\n\" Heap Fetches: 654\"\n\" Buffers: shared hit=3224\"\n\" -> Index Only Scan using profiles_pkey on public.profiles\nrecipient_profiles_messages (cost=0.00..1.40 rows=1 width=4) (actual\ntime=0.007..0.009 rows=1 loops=783)\"\n\" Output: recipient_profiles_messages.id\"\n\" Index Cond: (recipient_profiles_messages.id =\nmessages.recipient)\"\n\" Heap Fetches: 647\"\n\" Buffers: shared hit=3229\"\n\"Total runtime: 77.528 ms\"\n\n\n\nOn Tue, Apr 2, 2013 at 2:40 AM, Armand du Plessis <[email protected]> wrote:\n\n> Hi Jeff,\n>\n> Sorry I should've mentioned the new instance is Postgres 9.2.3. The old\n> instance IO maxed out the disk/io available on a single EBS volume on AWS.\n> It had 2000 PIOPS but was constantly bottlenecked. I assumed that striping\n> 6 1000 IOPS volumes in RAID-0 would give me some breathing space on that\n> front, and looking at the iostat (just included in previous email) it seems\n> to be doing OK.\n>\n> I actually had pg_pool running as a test but to avoid having too many\n> moving parts in the change removed it from the equation. Need to look into\n> the proper configuration so it doesn't saturate my cluster worse than I'm\n> doing myself.\n>\n> I've commented inline.\n>\n> Regards,\n>\n> Armand\n>\n> PS. This is probably the most helpful mailing list I've ever come across.\n> Starting to feel a little more that it can be solved.\n>\n>\n> On Tue, Apr 2, 2013 at 2:21 AM, Jeff Janes <[email protected]> wrote:\n>\n>> On Mon, Apr 1, 2013 at 3:35 PM, Armand du Plessis <[email protected]> wrote:\n>>\n>>> [Apologies, I first sent this to the incorrect list, postgres-admin, in\n>>> the event you receive it twice]\n>>>\n>>> Hi there,\n>>>\n>>> I'm hoping someone on the list can shed some light on an issue I'm\n>>> having with our Postgresql cluster. I'm literally tearing out my hair and\n>>> don't have a deep enough understanding of Postgres to find the problem.\n>>>\n>>> What's happening is I had severe disk/io issues on our original Postgres\n>>> cluster (9.0.8)\n>>>\n>> and switched to a new instance with a RAID-0 volume array.\n>>>\n>>\n>> What was the old instance IO? Did you do IO benchmarking on both?\n>>\n>>\n>>> The machine's CPU usage would hover around 30% and our database would\n>>> run lightning fast with pg_locks hovering between 100-200.\n>>>\n>>> Within a few seconds something would trigger a massive increase in\n>>> pg_locks so that it suddenly shoots up to 4000-8000. At this point\n>>> everything dies. 
Queries that usually take a few milliseconds takes minutes\n>>> and everything is unresponsive until I restart postgres.\n>>>\n>>\n>> I think that pg_locks is pretty much a red herring. All it means is that\n>> you have a lot more active connections than you used to. All active\n>> connections are going to hold various locks, while most idle connections\n>> (other than 'idle in transaction') connections will not hold any.\n>>\n>> Although I doubt it will solve this particular problem, you should\n>> probably use a connection pooler.\n>>\n>>\n>>\n>>> shared_buffers = 32GB\n>>>\n>>\n>> That seems very high. There are reports that using >8 GB leads to\n>> precisely the type of problem you are seeing (checkpoint associated\n>> freezes). Although I've never seen those reports when fsync=off.\n>>\n>> I thought you might be suffering from the problem solved in release 9.1\n>> by item \"Merge duplicate fsync requests (Robert Haas, Greg Smith)\", but\n>> then I realized that with fsync=off it could not be that.\n>>\n>>\n>>\n>>>\n>>> max_connections = 800\n>>>\n>>\n>> That also is very high.\n>>\n>>\n>>> The problems seem to overlap with checkpoints.\n>>>\n>>> 2013-04-01 21:31:35.592 UTC,,,26877,,5159fa5f.68fd,1,,2013-04-01\n>>> 21:21:35 UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n>>> 2013-04-01 21:40:35.033 UTC,,,26877,,5159fa5f.68fd,2,,2013-04-01\n>>> 21:21:35 UTC,,0,LOG,00000,\"checkpoint complete: wrote 100635 buffers\n>>> (2.4%); 0 transaction log file(s) added, 0 removed, 1 recycled;\n>>> write=539.439 s, sync=0.000 s, total=539.441 s; sync files=0, longest=0.000\n>>> s, average=0.000 s\",,,,,,,,,\"\"\n>>> 2013-04-01 21:41:35.093 UTC,,,26877,,5159fa5f.68fd,3,,2013-04-01\n>>> 21:21:35 UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n>>>\n>>\n>>\n>> I think you changed checkpoint_timout from default (5 min) to 10 minutes,\n>> without telling us. Anyway, this is where it would be nice to know how\n>> much of the 539.439 s in the write phase was spent blocking on writes, and\n>> how much was spent napping. But that info is not collected by pgsql.\n>>\n>\n> I did actually change it to 25 minutes. Apologies it was probably lost in\n> the text of a previous email. Here's the changed settings:\n>\n> # - Background Writer -\n>\n> bgwriter_delay = 200ms # 10-10000ms between rounds\n> bgwriter_lru_maxpages = 400 # 0-1000 max buffers written/round\n> bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers\n> scanned/round\n>\n> checkpoint_segments = 128\n> checkpoint_timeout = 25min\n>\n> It seems to be lasting longer with these settings.\n>\n>\n>>\n>> Your top output looked for it was a time at which there were no problems,\n>> and it didn't include the top processes, so it wasn't very informative.\n>>\n>> If you could upgrade to 9.2 and capture some data with track_io_timing,\n>> that could be useful.\n>>\n>\n> I'm looking into track_io_timing.\n>\n>\n>>\n>> Cheers,\n>>\n>> Jeff\n>>\n>\n>\n\nI've run an EXPLAIN ANALYZE on one of the queries that appeared in the pg_locks (although like you say that might be a red herring) both during normal response times (2) and also after the locks backlog materialized (1) \nThe output below, I've just blanked out some columns. The IO timings do seem an order of magnitude slower but not excessive unless I'm reading it wrong. 
\"Limit (cost=2364.19..2365.44 rows=500 width=177) (actual time=6501.103..6507.196 rows=500 loops=1)\"\n\" Output: \" Buffers: shared hit=7163 read=137\"\" I/O Timings: read=107.771\"\n\" -> Sort (cost=2364.19..2365.56 rows=549 width=177) (actual time=6501.095..6503.216 rows=500 loops=1)\"\" Output: \n\" Sort Key: messages.created_at\"\" Sort Method: quicksort Memory: 294kB\"\" Buffers: shared hit=7163 read=137\"\n\" I/O Timings: read=107.771\"\" -> Nested Loop (cost=181.19..2339.21 rows=549 width=177) (actual time=6344.410..6495.377 rows=783 loops=1)\"\n\" Output: \" Buffers: shared hit=7160 read=137\"\" I/O Timings: read=107.771\"\n\" -> Nested Loop (cost=181.19..1568.99 rows=549 width=177) (actual time=6344.389..6470.549 rows=783 loops=1)\"\" Output: \n\" Buffers: shared hit=3931 read=137\"\" I/O Timings: read=107.771\"\n\" -> Bitmap Heap Scan on public.messages (cost=181.19..798.78 rows=549 width=177) (actual time=6344.342..6436.117 rows=783 loops=1)\"\n\n\" Output: \" Recheck Cond: \" Buffers: shared hit=707 read=137\"\n\" I/O Timings: read=107.771\"\" -> BitmapOr (cost=181.19..181.19 rows=549 width=0) (actual time=6344.226..6344.226 rows=0 loops=1)\"\n\" Buffers: shared hit=120 read=20\"\" I/O Timings: read=37.085\"\n\" -> Bitmap Index Scan on messages_sender_type_created_at_idx (cost=0.00..23.41 rows=309 width=0) (actual time=6343.358..6343.358 rows=366 loops=1)\"\n\" Index Cond: \" Buffers: shared hit=26 read=15\"\n\" I/O Timings: read=36.977\"\" -> Bitmap Index Scan on messages_type_sender_recipient_created_at \n\" Buffers: shared hit=94 read=5\"\" I/O Timings: read=0.108\"\n\" -> Index Only Scan using profiles_pkey on public.profiles (cost=0.00..1.39 rows=1 width=4) (actual time=0.018..0.024 rows=1 loops=783)\"\n\" Output: profiles.id\"\" Index Cond: (profiles.id = messages.sender)\"\n\" Heap Fetches: 661\"\" Buffers: shared hit=3224\"\n\" -> Index Only Scan using profiles_pkey on public.profiles recipient_profiles_messages (cost=0.00..1.39 rows=1 width=4) (actual time=0.014..0.018 rows=1 loops=783)\"\n\" Output: recipient_profiles_messages.id\"\" Index Cond: (recipient_profiles_messages.id = messages.recipient)\"\n\" Heap Fetches: 667\"\" Buffers: shared hit=3229\"\"Total runtime: 6509.328 ms\"\n\"Limit (cost=2366.57..2367.82 rows=500 width=177) (actual time=73.284..76.296 rows=500 loops=1)\"\n\" Output: various columns\"\" Buffers: shared hit=6738 read=562\"\" I/O Timings: read=19.212\"\n\" -> Sort (cost=2366.57..2367.94 rows=549 width=177) (actual time=73.276..74.300 rows=500 loops=1)\"\" Output: various columns\"\n\" Sort Key: messages.created_at\"\" Sort Method: quicksort Memory: 294kB\"\" Buffers: shared hit=6738 read=562\"\n\" I/O Timings: read=19.212\"\" -> Nested Loop (cost=181.19..2341.59 rows=549 width=177) (actual time=3.556..69.866 rows=783 loops=1)\"\n\" Output: various columns\" Buffers: shared hit=6735 read=562\"\" I/O Timings: read=19.212\"\n\" -> Nested Loop (cost=181.19..1570.19 rows=549 width=177) (actual time=3.497..53.820 rows=783 loops=1)\"\" Output: various columns\n\" Buffers: shared hit=3506 read=562\"\" I/O Timings: read=19.212\"\n\" -> Bitmap Heap Scan on public.messages (cost=181.19..798.78 rows=549 width=177) (actual time=3.408..32.906 rows=783 loops=1)\"\n\" Output: various columns\n\" Recheck Cond: ()\" Buffers: shared hit=282 read=562\"\n\" I/O Timings: read=19.212\"\" -> BitmapOr (cost=181.19..181.19 rows=549 width=0) (actual time=3.279..3.279 rows=0 loops=1)\"\n\" Buffers: shared hit=114 read=26\"\" I/O Timings: read=1.755\"\n\" -> Bitmap Index 
Scan on messages_sender_type_created_at_idx (cost=0.00..23.41 rows=309 width=0) (actual time=1.882..1.882 rows=366 loops=1)\"\n\" Index Cond: \" Buffers: shared hit=25 read=16\"\n\" I/O Timings: read=1.085\"\" -> Bitmap Index Scan on \n\" Buffers: shared hit=89 read=10\"\" I/O Timings: read=0.670\"\n\" -> Index Only Scan using profiles_pkey on public.profiles (cost=0.00..1.40 rows=1 width=4) (actual time=0.012..0.015 rows=1 loops=783)\"\n\" Output: profiles.id\"\" Index Cond: (profiles.id = messages.sender)\"\n\" Heap Fetches: 654\"\" Buffers: shared hit=3224\"\n\" -> Index Only Scan using profiles_pkey on public.profiles recipient_profiles_messages (cost=0.00..1.40 rows=1 width=4) (actual time=0.007..0.009 rows=1 loops=783)\"\n\" Output: recipient_profiles_messages.id\"\" Index Cond: (recipient_profiles_messages.id = messages.recipient)\"\n\" Heap Fetches: 647\"\" Buffers: shared hit=3229\"\"Total runtime: 77.528 ms\"\nOn Tue, Apr 2, 2013 at 2:40 AM, Armand du Plessis <[email protected]> wrote:\nHi Jeff, Sorry I should've mentioned the new instance is Postgres 9.2.3. The old instance IO maxed out the disk/io available on a single EBS volume on AWS. It had 2000 PIOPS but was constantly bottlenecked. I assumed that striping 6 1000 IOPS volumes in RAID-0 would give me some breathing space on that front, and looking at the iostat (just included in previous email) it seems to be doing OK. \nI actually had pg_pool running as a test but to avoid having too many moving parts in the change removed it from the equation. Need to look into the proper configuration so it doesn't saturate my cluster worse than I'm doing myself.\nI've commented inline. Regards,ArmandPS. This is probably the most helpful mailing list I've ever come across. Starting to feel a little more that it can be solved. \nOn Tue, Apr 2, 2013 at 2:21 AM, Jeff Janes <[email protected]> wrote:\nOn Mon, Apr 1, 2013 at 3:35 PM, Armand du Plessis <[email protected]> wrote:\n\n[Apologies, I first sent this to the incorrect list, postgres-admin, in the event you receive it twice]\nHi there,\nI'm hoping someone on the list can shed some light on an issue I'm having with our Postgresql cluster. I'm literally tearing out my hair and don't have a deep enough understanding of Postgres to find the problem. \nWhat's happening is I had severe disk/io issues on our original Postgres cluster (9.0.8) \n\nand switched to a new instance with a RAID-0 volume array.\nWhat was the old instance IO? Did you do IO benchmarking on both? \n The machine's CPU usage would hover around 30% and our database would run lightning fast with pg_locks hovering between 100-200. \nWithin a few seconds something would trigger a massive increase in pg_locks so that it suddenly shoots up to 4000-8000. At this point everything dies. Queries that usually take a few milliseconds takes minutes and everything is unresponsive until I restart postgres. \nI think that pg_locks is pretty much a red herring. All it means is that you have a lot more active connections than you used to. All active connections are going to hold various locks, while most idle connections (other than 'idle in transaction') connections will not hold any.\nAlthough I doubt it will solve this particular problem, you should probably use a connection pooler.\n\nshared_buffers = 32GBThat seems very high. There are reports that using >8 GB leads to precisely the type of problem you are seeing (checkpoint associated freezes). Although I've never seen those reports when fsync=off. 
\nI thought you might be suffering from the problem solved in release 9.1 by item \"Merge duplicate fsync requests (Robert Haas, Greg Smith)\", but then I realized that with fsync=off it could not be that.\n \nmax_connections = 800\nThat also is very high.\nThe problems seem to overlap with checkpoints. 2013-04-01 21:31:35.592 UTC,,,26877,,5159fa5f.68fd,1,,2013-04-01 21:21:35 UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n2013-04-01 21:40:35.033 UTC,,,26877,,5159fa5f.68fd,2,,2013-04-01 21:21:35 UTC,,0,LOG,00000,\"checkpoint complete: wrote 100635 buffers (2.4%); 0 transaction log file(s) added, 0 removed, 1 recycled; write=539.439 s, sync=0.000 s, total=539.441 s; sync files=0, longest=0.000 s, average=0.000 s\",,,,,,,,,\"\"\n2013-04-01 21:41:35.093 UTC,,,26877,,5159fa5f.68fd,3,,2013-04-01 21:21:35 UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\nI think you changed checkpoint_timout from default (5 min) to 10 minutes, without telling us. Anyway, this is where it would be nice to know how much of the 539.439 s in the write phase was spent blocking on writes, and how much was spent napping. But that info is not collected by pgsql.\nI did actually change it to 25 minutes. Apologies it was probably lost in the text of a previous email. Here's the changed settings:\n\n# - Background Writer -bgwriter_delay = 200ms # 10-10000ms between rounds\nbgwriter_lru_maxpages = 400 # 0-1000 max buffers written/round\nbgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers scanned/round\ncheckpoint_segments = 128\ncheckpoint_timeout = 25min \nIt seems to be lasting longer with these settings. \n\nYour top output looked for it was a time at which there were no problems, and it didn't include the top processes, so it wasn't very informative. If you could upgrade to 9.2 and capture some data with track_io_timing, that could be useful.\nI'm looking into track_io_timing. \n\nCheers,Jeff",
"msg_date": "Tue, 2 Apr 2013 02:59:09 +0200",
"msg_from": "Armand du Plessis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with pg_locks explosion"
},
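Both plans above include "I/O Timings" lines because track_io_timing was already enabled for the session. For anyone reproducing this on 9.2, a minimal sketch might look like the following; the trivial SELECT is a stand-in for the real messages query, and the pg_stat_database query assumes the setting is enabled instance-wide so that backend write times get counted too:

SET track_io_timing = on;                        -- per session (superuser), or set it in postgresql.conf
EXPLAIN (ANALYZE, BUFFERS, VERBOSE) SELECT 1;    -- substitute the messages query shown above

-- cumulative block read/write wait time in milliseconds, per database
SELECT datname, blk_read_time, blk_write_time
  FROM pg_stat_database
 WHERE datname = current_database();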
{
"msg_contents": "Armand,\n\nAll of the symptoms you describe line up perfectly with a problem I had\nrecently when upgrading DB hardware.\nEverything ran find until we hit some threshold somewhere at which point\nthe locks would pile up in the thousands just as you describe, all while we\nwere not I/O bound.\n\nI was moving from a DELL 810 that used a flex memory bridge to a DELL 820\nthat used round robin on their quad core intels.\n(Interestingly we also found out that DELL is planning on rolling back to\nthe flex memory bridge later this year.)\n\nAny chance you could find out if your old processors might have been using\nflex while you're new processors might be using round robin?\n\n-s\n\n\nOn Mon, Apr 1, 2013 at 5:59 PM, Armand du Plessis <[email protected]> wrote:\n\n> I've run an EXPLAIN ANALYZE on one of the queries that appeared in the\n> pg_locks (although like you say that might be a red herring) both during\n> normal response times (2) and also after the locks backlog materialized (1)\n>\n> The output below, I've just blanked out some columns. The IO timings do\n> seem an order of magnitude slower but not excessive unless I'm reading it\n> wrong.\n>\n> \"Limit (cost=2364.19..2365.44 rows=500 width=177) (actual\n> time=6501.103..6507.196 rows=500 loops=1)\"\n> \" Output:\n> \" Buffers: shared hit=7163 read=137\"\n> \" I/O Timings: read=107.771\"\n> \" -> Sort (cost=2364.19..2365.56 rows=549 width=177) (actual\n> time=6501.095..6503.216 rows=500 loops=1)\"\n> \" Output:\n> \" Sort Key: messages.created_at\"\n> \" Sort Method: quicksort Memory: 294kB\"\n> \" Buffers: shared hit=7163 read=137\"\n> \" I/O Timings: read=107.771\"\n> \" -> Nested Loop (cost=181.19..2339.21 rows=549 width=177)\n> (actual time=6344.410..6495.377 rows=783 loops=1)\"\n> \" Output:\n> \" Buffers: shared hit=7160 read=137\"\n> \" I/O Timings: read=107.771\"\n> \" -> Nested Loop (cost=181.19..1568.99 rows=549 width=177)\n> (actual time=6344.389..6470.549 rows=783 loops=1)\"\n> \" Output:\n> \" Buffers: shared hit=3931 read=137\"\n> \" I/O Timings: read=107.771\"\n> \" -> Bitmap Heap Scan on public.messages\n> (cost=181.19..798.78 rows=549 width=177) (actual time=6344.342..6436.117\n> rows=783 loops=1)\"\n> \" Output:\n> \" Recheck Cond:\n> \" Buffers: shared hit=707 read=137\"\n> \" I/O Timings: read=107.771\"\n> \" -> BitmapOr (cost=181.19..181.19 rows=549\n> width=0) (actual time=6344.226..6344.226 rows=0 loops=1)\"\n> \" Buffers: shared hit=120 read=20\"\n> \" I/O Timings: read=37.085\"\n> \" -> Bitmap Index Scan on\n> messages_sender_type_created_at_idx (cost=0.00..23.41 rows=309 width=0)\n> (actual time=6343.358..6343.358 rows=366 loops=1)\"\n> \" Index Cond:\n> \" Buffers: shared hit=26 read=15\"\n> \" I/O Timings: read=36.977\"\n> \" -> Bitmap Index Scan on\n> messages_type_sender_recipient_created_at\n> \" Buffers: shared hit=94 read=5\"\n> \" I/O Timings: read=0.108\"\n> \" -> Index Only Scan using profiles_pkey on\n> public.profiles (cost=0.00..1.39 rows=1 width=4) (actual time=0.018..0.024\n> rows=1 loops=783)\"\n> \" Output: profiles.id\"\n> \" Index Cond: (profiles.id = messages.sender)\"\n> \" Heap Fetches: 661\"\n> \" Buffers: shared hit=3224\"\n> \" -> Index Only Scan using profiles_pkey on public.profiles\n> recipient_profiles_messages (cost=0.00..1.39 rows=1 width=4) (actual\n> time=0.014..0.018 rows=1 loops=783)\"\n> \" Output: recipient_profiles_messages.id\"\n> \" Index Cond: (recipient_profiles_messages.id =\n> messages.recipient)\"\n> \" Heap Fetches: 667\"\n> \" Buffers: shared 
hit=3229\"\n> \"Total runtime: 6509.328 ms\"\n>\n>\n>\n> \"Limit (cost=2366.57..2367.82 rows=500 width=177) (actual time=\n> 73.284..76.296 rows=500 loops=1)\"\n> \" Output: various columns\"\n> \" Buffers: shared hit=6738 read=562\"\n> \" I/O Timings: read=19.212\"\n> \" -> Sort (cost=2366.57..2367.94 rows=549 width=177) (actual time=\n> 73.276..74.300 rows=500 loops=1)\"\n> \" Output: various columns\"\n> \" Sort Key: messages.created_at\"\n> \" Sort Method: quicksort Memory: 294kB\"\n> \" Buffers: shared hit=6738 read=562\"\n> \" I/O Timings: read=19.212\"\n> \" -> Nested Loop (cost=181.19..2341.59 rows=549 width=177)\n> (actual time=3.556..69.866 rows=783 loops=1)\"\n> \" Output: various columns\n> \" Buffers: shared hit=6735 read=562\"\n> \" I/O Timings: read=19.212\"\n> \" -> Nested Loop (cost=181.19..1570.19 rows=549 width=177)\n> (actual time=3.497..53.820 rows=783 loops=1)\"\n> \" Output: various columns\n> \" Buffers: shared hit=3506 read=562\"\n> \" I/O Timings: read=19.212\"\n> \" -> Bitmap Heap Scan on public.messages\n> (cost=181.19..798.78 rows=549 width=177) (actual time=3.408..32.906\n> rows=783 loops=1)\"\n> \" Output: various columns\n> \" Recheck Cond: ()\n> \" Buffers: shared hit=282 read=562\"\n> \" I/O Timings: read=19.212\"\n> \" -> BitmapOr (cost=181.19..181.19 rows=549\n> width=0) (actual time=3.279..3.279 rows=0 loops=1)\"\n> \" Buffers: shared hit=114 read=26\"\n> \" I/O Timings: read=1.755\"\n> \" -> Bitmap Index Scan on\n> messages_sender_type_created_at_idx (cost=0.00..23.41 rows=309 width=0)\n> (actual time=1.882..1.882 rows=366 loops=1)\"\n> \" Index Cond:\n> \" Buffers: shared hit=25 read=16\"\n> \" I/O Timings: read=1.085\"\n> \" -> Bitmap Index Scan on\n> \" Buffers: shared hit=89 read=10\"\n> \" I/O Timings: read=0.670\"\n> \" -> Index Only Scan using profiles_pkey on\n> public.profiles (cost=0.00..1.40 rows=1 width=4) (actual time=0.012..0.015\n> rows=1 loops=783)\"\n> \" Output: profiles.id\"\n> \" Index Cond: (profiles.id = messages.sender)\"\n> \" Heap Fetches: 654\"\n> \" Buffers: shared hit=3224\"\n> \" -> Index Only Scan using profiles_pkey on public.profiles\n> recipient_profiles_messages (cost=0.00..1.40 rows=1 width=4) (actual\n> time=0.007..0.009 rows=1 loops=783)\"\n> \" Output: recipient_profiles_messages.id\"\n> \" Index Cond: (recipient_profiles_messages.id =\n> messages.recipient)\"\n> \" Heap Fetches: 647\"\n> \" Buffers: shared hit=3229\"\n> \"Total runtime: 77.528 ms\"\n>\n>\n>\n> On Tue, Apr 2, 2013 at 2:40 AM, Armand du Plessis <[email protected]> wrote:\n>\n>> Hi Jeff,\n>>\n>> Sorry I should've mentioned the new instance is Postgres 9.2.3. The old\n>> instance IO maxed out the disk/io available on a single EBS volume on AWS.\n>> It had 2000 PIOPS but was constantly bottlenecked. I assumed that striping\n>> 6 1000 IOPS volumes in RAID-0 would give me some breathing space on that\n>> front, and looking at the iostat (just included in previous email) it seems\n>> to be doing OK.\n>>\n>> I actually had pg_pool running as a test but to avoid having too many\n>> moving parts in the change removed it from the equation. Need to look into\n>> the proper configuration so it doesn't saturate my cluster worse than I'm\n>> doing myself.\n>>\n>> I've commented inline.\n>>\n>> Regards,\n>>\n>> Armand\n>>\n>> PS. 
This is probably the most helpful mailing list I've ever come across.\n>> Starting to feel a little more that it can be solved.\n>>\n>>\n>> On Tue, Apr 2, 2013 at 2:21 AM, Jeff Janes <[email protected]> wrote:\n>>\n>>> On Mon, Apr 1, 2013 at 3:35 PM, Armand du Plessis <[email protected]> wrote:\n>>>\n>>>> [Apologies, I first sent this to the incorrect list, postgres-admin, in\n>>>> the event you receive it twice]\n>>>>\n>>>> Hi there,\n>>>>\n>>>> I'm hoping someone on the list can shed some light on an issue I'm\n>>>> having with our Postgresql cluster. I'm literally tearing out my hair and\n>>>> don't have a deep enough understanding of Postgres to find the problem.\n>>>>\n>>>> What's happening is I had severe disk/io issues on our original\n>>>> Postgres cluster (9.0.8)\n>>>>\n>>> and switched to a new instance with a RAID-0 volume array.\n>>>>\n>>>\n>>> What was the old instance IO? Did you do IO benchmarking on both?\n>>>\n>>>\n>>>> The machine's CPU usage would hover around 30% and our database would\n>>>> run lightning fast with pg_locks hovering between 100-200.\n>>>>\n>>>> Within a few seconds something would trigger a massive increase in\n>>>> pg_locks so that it suddenly shoots up to 4000-8000. At this point\n>>>> everything dies. Queries that usually take a few milliseconds takes minutes\n>>>> and everything is unresponsive until I restart postgres.\n>>>>\n>>>\n>>> I think that pg_locks is pretty much a red herring. All it means is\n>>> that you have a lot more active connections than you used to. All active\n>>> connections are going to hold various locks, while most idle connections\n>>> (other than 'idle in transaction') connections will not hold any.\n>>>\n>>> Although I doubt it will solve this particular problem, you should\n>>> probably use a connection pooler.\n>>>\n>>>\n>>>\n>>>> shared_buffers = 32GB\n>>>>\n>>>\n>>> That seems very high. There are reports that using >8 GB leads to\n>>> precisely the type of problem you are seeing (checkpoint associated\n>>> freezes). Although I've never seen those reports when fsync=off.\n>>>\n>>> I thought you might be suffering from the problem solved in release 9.1\n>>> by item \"Merge duplicate fsync requests (Robert Haas, Greg Smith)\", but\n>>> then I realized that with fsync=off it could not be that.\n>>>\n>>>\n>>>\n>>>>\n>>>> max_connections = 800\n>>>>\n>>>\n>>> That also is very high.\n>>>\n>>>\n>>>> The problems seem to overlap with checkpoints.\n>>>>\n>>>> 2013-04-01 21:31:35.592 UTC,,,26877,,5159fa5f.68fd,1,,2013-04-01\n>>>> 21:21:35 UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n>>>> 2013-04-01 21:40:35.033 UTC,,,26877,,5159fa5f.68fd,2,,2013-04-01\n>>>> 21:21:35 UTC,,0,LOG,00000,\"checkpoint complete: wrote 100635 buffers\n>>>> (2.4%); 0 transaction log file(s) added, 0 removed, 1 recycled;\n>>>> write=539.439 s, sync=0.000 s, total=539.441 s; sync files=0, longest=0.000\n>>>> s, average=0.000 s\",,,,,,,,,\"\"\n>>>> 2013-04-01 21:41:35.093 UTC,,,26877,,5159fa5f.68fd,3,,2013-04-01\n>>>> 21:21:35 UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n>>>>\n>>>\n>>>\n>>> I think you changed checkpoint_timout from default (5 min) to 10\n>>> minutes, without telling us. Anyway, this is where it would be nice to\n>>> know how much of the 539.439 s in the write phase was spent blocking on\n>>> writes, and how much was spent napping. But that info is not collected by\n>>> pgsql.\n>>>\n>>\n>> I did actually change it to 25 minutes. Apologies it was probably lost in\n>> the text of a previous email. 
Here's the changed settings:\n>>\n>> # - Background Writer -\n>>\n>> bgwriter_delay = 200ms # 10-10000ms between rounds\n>> bgwriter_lru_maxpages = 400 # 0-1000 max buffers written/round\n>> bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers\n>> scanned/round\n>>\n>> checkpoint_segments = 128\n>> checkpoint_timeout = 25min\n>>\n>> It seems to be lasting longer with these settings.\n>>\n>>\n>>>\n>>> Your top output looked for it was a time at which there were no\n>>> problems, and it didn't include the top processes, so it wasn't very\n>>> informative.\n>>>\n>>> If you could upgrade to 9.2 and capture some data with track_io_timing,\n>>> that could be useful.\n>>>\n>>\n>> I'm looking into track_io_timing.\n>>\n>>\n>>>\n>>> Cheers,\n>>>\n>>> Jeff\n>>>\n>>\n>>\n>\n\nArmand,All of the symptoms you describe line up perfectly with a problem I had recently when upgrading DB hardware.Everything ran find until we hit some threshold somewhere at which point the locks would pile up in the thousands just as you describe, all while we were not I/O bound.\nI was moving from a DELL 810 that used a flex memory bridge to a DELL 820 that used round robin on their quad core intels.(Interestingly we also found out that DELL is planning on rolling back to the flex memory bridge later this year.)\nAny chance you could find out if your old processors might have been using flex while you're new processors might be using round robin?-s\nOn Mon, Apr 1, 2013 at 5:59 PM, Armand du Plessis <[email protected]> wrote:\nI've run an EXPLAIN ANALYZE on one of the queries that appeared in the pg_locks (although like you say that might be a red herring) both during normal response times (2) and also after the locks backlog materialized (1) \nThe output below, I've just blanked out some columns. The IO timings do seem an order of magnitude slower but not excessive unless I'm reading it wrong. 
\"Limit (cost=2364.19..2365.44 rows=500 width=177) (actual time=6501.103..6507.196 rows=500 loops=1)\"\n\" Output: \" Buffers: shared hit=7163 read=137\"\" I/O Timings: read=107.771\"\n\" -> Sort (cost=2364.19..2365.56 rows=549 width=177) (actual time=6501.095..6503.216 rows=500 loops=1)\"\" Output: \n\" Sort Key: messages.created_at\"\" Sort Method: quicksort Memory: 294kB\"\" Buffers: shared hit=7163 read=137\"\n\" I/O Timings: read=107.771\"\" -> Nested Loop (cost=181.19..2339.21 rows=549 width=177) (actual time=6344.410..6495.377 rows=783 loops=1)\"\n\" Output: \" Buffers: shared hit=7160 read=137\"\" I/O Timings: read=107.771\"\n\" -> Nested Loop (cost=181.19..1568.99 rows=549 width=177) (actual time=6344.389..6470.549 rows=783 loops=1)\"\" Output: \n\" Buffers: shared hit=3931 read=137\"\" I/O Timings: read=107.771\"\n\" -> Bitmap Heap Scan on public.messages (cost=181.19..798.78 rows=549 width=177) (actual time=6344.342..6436.117 rows=783 loops=1)\"\n\n\" Output: \" Recheck Cond: \" Buffers: shared hit=707 read=137\"\n\" I/O Timings: read=107.771\"\" -> BitmapOr (cost=181.19..181.19 rows=549 width=0) (actual time=6344.226..6344.226 rows=0 loops=1)\"\n\" Buffers: shared hit=120 read=20\"\" I/O Timings: read=37.085\"\n\" -> Bitmap Index Scan on messages_sender_type_created_at_idx (cost=0.00..23.41 rows=309 width=0) (actual time=6343.358..6343.358 rows=366 loops=1)\"\n\" Index Cond: \" Buffers: shared hit=26 read=15\"\n\" I/O Timings: read=36.977\"\" -> Bitmap Index Scan on messages_type_sender_recipient_created_at \n\" Buffers: shared hit=94 read=5\"\" I/O Timings: read=0.108\"\n\" -> Index Only Scan using profiles_pkey on public.profiles (cost=0.00..1.39 rows=1 width=4) (actual time=0.018..0.024 rows=1 loops=783)\"\n\" Output: profiles.id\"\" Index Cond: (profiles.id = messages.sender)\"\n\" Heap Fetches: 661\"\" Buffers: shared hit=3224\"\n\" -> Index Only Scan using profiles_pkey on public.profiles recipient_profiles_messages (cost=0.00..1.39 rows=1 width=4) (actual time=0.014..0.018 rows=1 loops=783)\"\n\" Output: recipient_profiles_messages.id\"\" Index Cond: (recipient_profiles_messages.id = messages.recipient)\"\n\" Heap Fetches: 667\"\" Buffers: shared hit=3229\"\"Total runtime: 6509.328 ms\"\n\"Limit (cost=2366.57..2367.82 rows=500 width=177) (actual time=73.284..76.296 rows=500 loops=1)\"\n\" Output: various columns\"\" Buffers: shared hit=6738 read=562\"\" I/O Timings: read=19.212\"\n\" -> Sort (cost=2366.57..2367.94 rows=549 width=177) (actual time=73.276..74.300 rows=500 loops=1)\"\n\" Output: various columns\"\n\" Sort Key: messages.created_at\"\" Sort Method: quicksort Memory: 294kB\"\" Buffers: shared hit=6738 read=562\"\n\" I/O Timings: read=19.212\"\" -> Nested Loop (cost=181.19..2341.59 rows=549 width=177) (actual time=3.556..69.866 rows=783 loops=1)\"\n\" Output: various columns\" Buffers: shared hit=6735 read=562\"\" I/O Timings: read=19.212\"\n\" -> Nested Loop (cost=181.19..1570.19 rows=549 width=177) (actual time=3.497..53.820 rows=783 loops=1)\"\" Output: various columns\n\" Buffers: shared hit=3506 read=562\"\" I/O Timings: read=19.212\"\n\" -> Bitmap Heap Scan on public.messages (cost=181.19..798.78 rows=549 width=177) (actual time=3.408..32.906 rows=783 loops=1)\"\n\" Output: various columns\n\" Recheck Cond: ()\" Buffers: shared hit=282 read=562\"\n\" I/O Timings: read=19.212\"\" -> BitmapOr (cost=181.19..181.19 rows=549 width=0) (actual time=3.279..3.279 rows=0 loops=1)\"\n\" Buffers: shared hit=114 read=26\"\" I/O Timings: read=1.755\"\n\" -> Bitmap 
Index Scan on messages_sender_type_created_at_idx (cost=0.00..23.41 rows=309 width=0) (actual time=1.882..1.882 rows=366 loops=1)\"\n\" Index Cond: \" Buffers: shared hit=25 read=16\"\n\" I/O Timings: read=1.085\"\" -> Bitmap Index Scan on \n\" Buffers: shared hit=89 read=10\"\" I/O Timings: read=0.670\"\n\" -> Index Only Scan using profiles_pkey on public.profiles (cost=0.00..1.40 rows=1 width=4) (actual time=0.012..0.015 rows=1 loops=783)\"\n\" Output: profiles.id\"\" Index Cond: (profiles.id = messages.sender)\"\n\" Heap Fetches: 654\"\" Buffers: shared hit=3224\"\n\" -> Index Only Scan using profiles_pkey on public.profiles recipient_profiles_messages (cost=0.00..1.40 rows=1 width=4) (actual time=0.007..0.009 rows=1 loops=783)\"\n\" Output: recipient_profiles_messages.id\"\" Index Cond: (recipient_profiles_messages.id = messages.recipient)\"\n\" Heap Fetches: 647\"\" Buffers: shared hit=3229\"\"Total runtime: 77.528 ms\"\nOn Tue, Apr 2, 2013 at 2:40 AM, Armand du Plessis <[email protected]> wrote:\nHi Jeff, Sorry I should've mentioned the new instance is Postgres 9.2.3. The old instance IO maxed out the disk/io available on a single EBS volume on AWS. It had 2000 PIOPS but was constantly bottlenecked. I assumed that striping 6 1000 IOPS volumes in RAID-0 would give me some breathing space on that front, and looking at the iostat (just included in previous email) it seems to be doing OK. \nI actually had pg_pool running as a test but to avoid having too many moving parts in the change removed it from the equation. Need to look into the proper configuration so it doesn't saturate my cluster worse than I'm doing myself.\nI've commented inline. Regards,ArmandPS. This is probably the most helpful mailing list I've ever come across. Starting to feel a little more that it can be solved. \nOn Tue, Apr 2, 2013 at 2:21 AM, Jeff Janes <[email protected]> wrote:\nOn Mon, Apr 1, 2013 at 3:35 PM, Armand du Plessis <[email protected]> wrote:\n\n[Apologies, I first sent this to the incorrect list, postgres-admin, in the event you receive it twice]\nHi there,\nI'm hoping someone on the list can shed some light on an issue I'm having with our Postgresql cluster. I'm literally tearing out my hair and don't have a deep enough understanding of Postgres to find the problem. \nWhat's happening is I had severe disk/io issues on our original Postgres cluster (9.0.8) \n\nand switched to a new instance with a RAID-0 volume array.\nWhat was the old instance IO? Did you do IO benchmarking on both? \n The machine's CPU usage would hover around 30% and our database would run lightning fast with pg_locks hovering between 100-200. \nWithin a few seconds something would trigger a massive increase in pg_locks so that it suddenly shoots up to 4000-8000. At this point everything dies. Queries that usually take a few milliseconds takes minutes and everything is unresponsive until I restart postgres. \nI think that pg_locks is pretty much a red herring. All it means is that you have a lot more active connections than you used to. All active connections are going to hold various locks, while most idle connections (other than 'idle in transaction') connections will not hold any.\nAlthough I doubt it will solve this particular problem, you should probably use a connection pooler.\n\nshared_buffers = 32GBThat seems very high. There are reports that using >8 GB leads to precisely the type of problem you are seeing (checkpoint associated freezes). Although I've never seen those reports when fsync=off. 
\nI thought you might be suffering from the problem solved in release 9.1 by item \"Merge duplicate fsync requests (Robert Haas, Greg Smith)\", but then I realized that with fsync=off it could not be that.\n \nmax_connections = 800\nThat also is very high.\nThe problems seem to overlap with checkpoints. 2013-04-01 21:31:35.592 UTC,,,26877,,5159fa5f.68fd,1,,2013-04-01 21:21:35 UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\n2013-04-01 21:40:35.033 UTC,,,26877,,5159fa5f.68fd,2,,2013-04-01 21:21:35 UTC,,0,LOG,00000,\"checkpoint complete: wrote 100635 buffers (2.4%); 0 transaction log file(s) added, 0 removed, 1 recycled; write=539.439 s, sync=0.000 s, total=539.441 s; sync files=0, longest=0.000 s, average=0.000 s\",,,,,,,,,\"\"\n2013-04-01 21:41:35.093 UTC,,,26877,,5159fa5f.68fd,3,,2013-04-01 21:21:35 UTC,,0,LOG,00000,\"checkpoint starting: time\",,,,,,,,,\"\"\nI think you changed checkpoint_timout from default (5 min) to 10 minutes, without telling us. Anyway, this is where it would be nice to know how much of the 539.439 s in the write phase was spent blocking on writes, and how much was spent napping. But that info is not collected by pgsql.\nI did actually change it to 25 minutes. Apologies it was probably lost in the text of a previous email. Here's the changed settings:\n\n# - Background Writer -bgwriter_delay = 200ms # 10-10000ms between rounds\nbgwriter_lru_maxpages = 400 # 0-1000 max buffers written/round\nbgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers scanned/round\ncheckpoint_segments = 128\ncheckpoint_timeout = 25min \nIt seems to be lasting longer with these settings. \n\nYour top output looked for it was a time at which there were no problems, and it didn't include the top processes, so it wasn't very informative. If you could upgrade to 9.2 and capture some data with track_io_timing, that could be useful.\nI'm looking into track_io_timing. \n\nCheers,Jeff",
"msg_date": "Mon, 1 Apr 2013 18:07:47 -0700",
"msg_from": "Steven Crandell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with pg_locks explosion"
},
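Whether a box interleaves memory the way Steven describes is hard to confirm from inside a VM, but on physical hardware the socket and NUMA layout can at least be inspected. A hedged sketch of the usual commands follows; none of this output is available for the EC2 instance under discussion:

lscpu                     # sockets, cores per socket, NUMA node count
numactl --hardware        # memory per NUMA node and internode distances
grep 'model name' /proc/cpuinfo | sort | uniq -c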
{
"msg_contents": "Hi Steven,\n\nSounds very familiar. Painfully familiar :(\n\nBut I really don't know. All I can see is that in this particular\nconfiguration the instance has 2 x Intel Xeon E5-2670, eight-core\nprocessors. I can't find any info on whether it's flex or round robin. AWS\ntypically don't make the underlying hardware known. The exception is on the\nchip-types on the higher-end instance types which is where I got the info\nabove from.\n\nBelow is an excerpt from atop when the problem occur. The CPUs jump to high\nsys usage, not sure if that was similar to what you saw?\n\nHow did you get it resolved in the end?\n\nATOP - ip-10-155-231-112\n 2013/04/02 01:25:40 ------\n 2s elapsed\n59;169H 0 70.15s | | user 8.19s | |\n | | | #proc 1015 | |\n #zombie 0 | | clones 0 | |\n | | | #exit 2 |\nCPU | sys 3182% | | user 30% | | irq\n 1% | | | | idle 0% |\n | wait 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 98% | | user 1% | | irq\n 1% | | | | idle 0% |\n | cpu000 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 96% | | user 4% | | irq\n 0% | | | | idle 0% |\n | cpu001 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu002 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 99% | | user 1% | | irq\n 0% | | | | idle 0% |\n | cpu003 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu004 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu005 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 98% | | user 2% | | irq\n 0% | | | | idle 0% |\n | cpu006 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 99% | | user 1% | | irq\n 0% | | | | idle 0% |\n | cpu007 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 99% | | user 1% | | irq\n 0% | | | | idle 0% |\n | cpu008 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu009 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 99% | | user 1% | | irq\n 0% | | | | idle 0% |\n | cpu010 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu011 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 99% | | user 1% | | irq\n 0% | | | | idle 0% |\n | cpu012 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 97% | | user 3% | | irq\n 0% | | | | idle 0% |\n | cpu013 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu014 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu015 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu016 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 82% | | user 18% | | irq\n 0% | | | | idle 0% |\n | cpu017 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu018 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu019 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu020 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu021 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu022 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu023 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% 
|\n | cpu024 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu025 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu026 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu027 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 99% | | user 1% | | irq\n 0% | | | | idle 0% |\n | cpu028 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu029 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 100% | | user 0% | | irq\n 0% | | | | idle 0% |\n | cpu030 w 0% | | | |\nsteal 0% | | guest 0% |\ncpu | sys 99% | | user 1% | | irq\n 0% | | | | idle 0% |\n | cpu031 w 0% | | | |\nsteal 0% | | guest 0% |\nCPL | avg1 90.60 | | avg5 60.80 | |\n | avg15 39.77 | | | |\n csw 1011 | | intr 17568 | |\n | | | numcpu 32 |\nMEM | tot 58.5G | | free 418.4M | | cache\n 45.0G | dirty 0.6M | | buff 5.8M | |\n slab 501.2M | | | |\n | | | |\nSWP | tot 0.0M | | free 0.0M | |\n | | | | |\n | | | | |\nvmcom 49.8G | | vmlim 29.3G |\nPAG | scan 1858 | | | | stall 0\n | | | | |\n | | | swin 0 | |\n | | swout 0 |\nNET | transport | tcpi 318 | | tcpo 392 | udpi\n 34 | | udpo 39 | tcpao 0 | | tcppo\n 2 | tcprs 0 | | tcpie 0 | tcpor 0 |\n | udpnp 0 | udpip 0 |\nNET | network | | ipi 357 | | ipo\n 397 | ipfrw 0 | | deliv 357 | |\n | | | | |\nicmpi 0 | | icmpo 0 |\nNET | eth0 ---- | | pcki 318 | pcko 358 |\n | si 200 Kbps | so 947 Kbps | | coll 0 |\n | mlti 0 | erri 0 | | erro 0 |\ndrpi 0 | | drpo 0 |\nNET | lo ---- | | pcki 39 | pcko 39 |\n | si 79 Kbps | so 79 Kbps | | coll 0 |\n | mlti 0 | erri 0 | | erro 0 |\ndrpi 0 | | drpo 0 |\ndebug2: channel 0: window 997757 sent adjust 50819\n\n\nOn Tue, Apr 2, 2013 at 3:07 AM, Steven Crandell\n<[email protected]>wrote:\n\n> Armand,\n>\n> All of the symptoms you describe line up perfectly with a problem I had\n> recently when upgrading DB hardware.\n> Everything ran find until we hit some threshold somewhere at which point\n> the locks would pile up in the thousands just as you describe, all while we\n> were not I/O bound.\n>\n> I was moving from a DELL 810 that used a flex memory bridge to a DELL 820\n> that used round robin on their quad core intels.\n> (Interestingly we also found out that DELL is planning on rolling back to\n> the flex memory bridge later this year.)\n>\n> Any chance you could find out if your old processors might have been using\n> flex while you're new processors might be using round robin?\n>\n> -s\n>\n>\n>\n\nHi Steven, Sounds very familiar. Painfully familiar :( \nBut I really don't know. All I can see is that in this particular configuration the instance has 2 x Intel Xeon E5-2670, eight-core processors. I can't find any info on whether it's flex or round robin. AWS typically don't make the underlying hardware known. The exception is on the chip-types on the higher-end instance types which is where I got the info above from. \nBelow is an excerpt from atop when the problem occur. 
",
"msg_date": "Tue, 2 Apr 2013 04:53:42 +0200",
"msg_from": "Armand du Plessis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with pg_locks explosion"
},
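An atop sample with nearly every core pegged at close to 100% system time, while user time and I/O wait stay near zero, matches the kernel-side suspects Jeff raises a few messages later (zone reclaim, huge pages, memory interleaving on large-RAM hosts). A hedged set of checks, using standard Linux paths that may differ on some RHEL/CentOS kernels (where THP lives under /sys/kernel/mm/redhat_transparent_hugepage):

cat /proc/sys/vm/zone_reclaim_mode                   # 0 is generally preferred for database workloads
cat /sys/kernel/mm/transparent_hugepage/enabled      # '[always]' can trigger compaction stalls
# to experiment, as root:
sysctl -w vm.zone_reclaim_mode=0
echo never > /sys/kernel/mm/transparent_hugepage/enabled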
{
"msg_contents": "Yeah, as I understand it you should have 6000 IOPS available for the md \ndevice (ideally).\n\nThe iostats you display certainly look benign... but the key time to be \nsampling would be when you see the lock list explode - could look very \ndifferent then.\n\nRe vm.dirty* - I would crank the values down by a factor of 5:\n\nvm.dirty_background_ratio = 1 (down from 5)\nvm.dirty_ratio = 2 (down from 10)\n\nAssuming of course that you actually are seeing an IO stall (which \nshould be catchable via iostat or iotop)... and not some other issue. \nOtherwise leave 'em alone and keep looking :-)\n\nCheers\n\nMark\n\n\nOn 02/04/13 13:31, Armand du Plessis wrote:\n>\n>\n> I had a look at the iostat output (on a 5s interval) and pasted it\n> below. The utilization and waits seems low. Included a sample below, #1\n> taken during normal operation and then when the locks happen it\n> basically drops to 0 across the board. My (mis)understanding of the IOPS\n> was that it would be 1000 IOPS per/volume and when in RAID0 should give\n> me quite a bit higher throughput than in a single EBS volume setup. (My\n> naive envelop calculation was #volumes * PIOPS = Effective IOPS :/)\n>\n>\n> I'm looking into vm.dirty_background_ratio, vm.dirty_ratio sysctls. Is\n> there any guidance or links available that would be useful as a starting\n> point?\n>\n> Thanks again for the help, I really appreciate it.\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 02 Apr 2013 16:41:06 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Problems with pg_locks explosion"
},
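A minimal shell sketch of the two suggestions above taken together, assuming the usual sysctl interface and that the array appears as md127; the sample length and log path are placeholders rather than details from the thread.

sudo sysctl -w vm.dirty_background_ratio=1   # down from 5, as suggested above
sudo sysctl -w vm.dirty_ratio=2              # down from 10
# Sample extended device stats at 1s granularity while a lock spike is reproduced;
# look for ~100% util and awaits of several hundred ms on md127 or its member volumes.
iostat -d -x -m 1 60 | tee /tmp/iostat-during-spike.log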
{
"msg_contents": "On Monday, April 1, 2013, Armand du Plessis wrote:\n\n> I've run an EXPLAIN ANALYZE on one of the queries that appeared in the\n> pg_locks (although like you say that might be a red herring) both during\n> normal response times (2) and also after the locks backlog materialized (1)\n>\n> The output below, I've just blanked out some columns. The IO timings do\n> seem an order of magnitude slower but not excessive unless I'm reading it\n> wrong.\n>\n> \"Limit (cost=2364.19..2365.44 rows=500 width=177) (actual\n> time=6501.103..6507.196 rows=500 loops=1)\"\n> \" Output:\n> \" Buffers: shared hit=7163 read=137\"\n> \" I/O Timings: read=107.771\"\n>\n\n...\n\n\n>\n> \"Limit (cost=2366.57..2367.82 rows=500 width=177) (actual\n> time=73.284..76.296 rows=500 loops=1)\"\n> \" Output: various columns\"\n> \" Buffers: shared hit=6738 read=562\"\n> \" I/O Timings: read=19.212\"\n>\n\nYou are correct that the different in IO timing for reads is not nearly\nenough to explain the difference, but the ratio is still large enough to\nperhaps be suggestive. It could be be that all the extra time is spent in\nIO writes (not reported here). If you turn on track_io_timing on\nsystem-wide you could check the write times in pg_stat_database.\n\n(Write time has an attribution problem. I need to make room for my data,\nso I write out someone else's. Is the time spent attributed to the one\ndoing the writing, or the one who owns the data written?)\n\nBut it is perhaps looking like it might not be IO at all, but rather some\nkind of internal kernel problem, such as the \"zone reclaim\" and \"huge\npages\" and memory interleaving, which have been discussed elsewhere in this\nlist for high CPU high RAM machines. I would summarize it for you, but I\ndon't understand it, and don't have ready access to machines with 64 CPUs\nand 128 GB of RAM in order to explore it for myself.\n\nBut if that is the case, then using a connection pooler to restrict the\nnumber of simultaneously active connections might actually be a big win\n(despite what I said previously).\n\nCheers,\n\nJeff\n\n>\n\nOn Monday, April 1, 2013, Armand du Plessis wrote:I've run an EXPLAIN ANALYZE on one of the queries that appeared in the pg_locks (although like you say that might be a red herring) both during normal response times (2) and also after the locks backlog materialized (1) \nThe output below, I've just blanked out some columns. The IO timings do seem an order of magnitude slower but not excessive unless I'm reading it wrong. \"Limit (cost=2364.19..2365.44 rows=500 width=177) (actual time=6501.103..6507.196 rows=500 loops=1)\"\n\" Output: \" Buffers: shared hit=7163 read=137\"\" I/O Timings: read=107.771\"\n ... \n\"Limit (cost=2366.57..2367.82 rows=500 width=177) (actual time=73.284..76.296 rows=500 loops=1)\"\n\" Output: various columns\"\" Buffers: shared hit=6738 read=562\"\" I/O Timings: read=19.212\"\nYou are correct that the different in IO timing for reads is not nearly enough to explain the difference, but the ratio is still large enough to perhaps be suggestive. It could be be that all the extra time is spent in IO writes (not reported here). If you turn on track_io_timing on system-wide you could check the write times in pg_stat_database. \n(Write time has an attribution problem. I need to make room for my data, so I write out someone else's. 
Is the time spent attributed to the one doing the writing, or the one who owns the data written?)\nBut it is perhaps looking like it might not be IO at all, but rather some kind of internal kernel problem, such as the \"zone reclaim\" and \"huge pages\" and memory interleaving, which have been discussed elsewhere in this list for high CPU high RAM machines. I would summarize it for you, but I don't understand it, and don't have ready access to machines with 64 CPUs and 128 GB of RAM in order to explore it for myself.\nBut if that is the case, then using a connection pooler to restrict the number of simultaneously active connections might actually be a big win (despite what I said previously). \nCheers,Jeff",
"msg_date": "Mon, 1 Apr 2013 23:08:12 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with pg_locks explosion"
},
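A sketch of what enabling track_io_timing cluster-wide and then reading the write-time counters could look like on 9.2; the postgresql.conf path and connection options below are placeholders, not taken from the thread.

# track_io_timing is reloadable; append it to the config and reload (PostgreSQL 9.2+).
echo "track_io_timing = on" >> /path/to/postgresql.conf
psql -U postgres -c "SELECT pg_reload_conf();"
# blk_read_time / blk_write_time are cumulative milliseconds of block I/O per database;
# compare snapshots taken before and after a lock pile-up.
psql -U postgres -c "SELECT datname, blk_read_time, blk_write_time FROM pg_stat_database;"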
{
"msg_contents": "On Monday, April 1, 2013, Mark Kirkwood wrote:\n\n>\n> Your provisioned volumes are much better than the default AWS ones, but\n> are still not hugely fast (i.e 1000 IOPS is about 8 MB/s worth of Postgres\n> 8k buffers). So you may need to look at adding more volumes into the array,\n> or adding some separate ones and putting pg_xlog directory on 'em.\n>\n> However before making changes I would recommend using iostat or sar to\n> monitor how volumes are handling the load (I usually choose a 1 sec\n> granularity and look for 100% util and high - server hundred ms - awaits).\n> Also iotop could be enlightening.\n>\n\nHi Mark,\n\nDo you have experience using these tools with AWS? When using non-DAS in\nother contexts, I've noticed that these tools often give deranged results,\nbecause the kernel doesn't correctly know what time to attribute to\n\"network\" and what to attribute to \"disk\". But I haven't looked into it on\nAWS EBS, maybe they do a better job there.\n\nThanks for any insight,\n\nJeff\n\nOn Monday, April 1, 2013, Mark Kirkwood wrote:\nYour provisioned volumes are much better than the default AWS ones, but are still not hugely fast (i.e 1000 IOPS is about 8 MB/s worth of Postgres 8k buffers). So you may need to look at adding more volumes into the array, or adding some separate ones and putting pg_xlog directory on 'em.\n\nHowever before making changes I would recommend using iostat or sar to monitor how volumes are handling the load (I usually choose a 1 sec granularity and look for 100% util and high - server hundred ms - awaits). Also iotop could be enlightening.\nHi Mark,Do you have experience using these tools with AWS? When using non-DAS in other contexts, I've noticed that these tools often give deranged results, because the kernel doesn't correctly know what time to attribute to \"network\" and what to attribute to \"disk\". But I haven't looked into it on AWS EBS, maybe they do a better job there.\n Thanks for any insight,Jeff",
"msg_date": "Mon, 1 Apr 2013 23:08:13 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with pg_locks explosion"
},
{
"msg_contents": "Thanks Jeff, yup, I'm actually busy setting up pg_pool now. Preliminary\nresults looks promising after switching some client nodes to connect\nthrough the pool.\n\nHere's the output of pg_stat_dabatase but also doesn't seem to be spending\nmore time there either. I'll have a look through the archives for the posts\nyou refer to. Certainly a simple 'upgrade' that's gone south badly here. On\npaper and with earlier tests with partial load this instance is much better\nsuited for our workload, bizarre.\n\nThanks for all the help.\n\n\nuser=# select * from pg_stat_database;\n datid | datname | numbackends | xact_commit | xact_rollback |\nblks_read | blks_hit | tup_returned | tup_fetched | tup_inserted |\ntup_updated | tup_deleted | conflicts | temp_files | temp_bytes |\ndeadlocks | blk_read_time | blk_write_time | stats_\nreset\n-------+-------------------+-------------+-------------+---------------+-----------+--------------+--------------+--------------+--------------+-------------+-------------+-----------+------------+-------------+-----------+---------------+----------------+\n 17671 | production | 232 | 524583730 | 213187 |\n623507632 | 403590060861 | 256853219556 | 185283046675 | 95086454 |\n 66720609 | 4449894 | 0 | 18 | 12076277760 | 0\n| 4689747.806 | 4526.539 | 2013-03-29 16:1\n8:09.334432+00\n\n\n\n\nOn Tue, Apr 2, 2013 at 8:08 AM, Jeff Janes <[email protected]> wrote:\n\n> On Monday, April 1, 2013, Armand du Plessis wrote:\n>\n>> I've run an EXPLAIN ANALYZE on one of the queries that appeared in the\n>> pg_locks (although like you say that might be a red herring) both during\n>> normal response times (2) and also after the locks backlog materialized (1)\n>>\n>> The output below, I've just blanked out some columns. The IO timings do\n>> seem an order of magnitude slower but not excessive unless I'm reading it\n>> wrong.\n>>\n>> \"Limit (cost=2364.19..2365.44 rows=500 width=177) (actual\n>> time=6501.103..6507.196 rows=500 loops=1)\"\n>> \" Output:\n>> \" Buffers: shared hit=7163 read=137\"\n>> \" I/O Timings: read=107.771\"\n>>\n>\n> ...\n>\n>\n>>\n>> \"Limit (cost=2366.57..2367.82 rows=500 width=177) (actual time=\n>> 73.284..76.296 rows=500 loops=1)\"\n>> \" Output: various columns\"\n>> \" Buffers: shared hit=6738 read=562\"\n>> \" I/O Timings: read=19.212\"\n>>\n>\n> You are correct that the different in IO timing for reads is not nearly\n> enough to explain the difference, but the ratio is still large enough to\n> perhaps be suggestive. It could be be that all the extra time is spent in\n> IO writes (not reported here). If you turn on track_io_timing on\n> system-wide you could check the write times in pg_stat_database.\n>\n> (Write time has an attribution problem. I need to make room for my data,\n> so I write out someone else's. Is the time spent attributed to the one\n> doing the writing, or the one who owns the data written?)\n>\n> But it is perhaps looking like it might not be IO at all, but rather some\n> kind of internal kernel problem, such as the \"zone reclaim\" and \"huge\n> pages\" and memory interleaving, which have been discussed elsewhere in this\n> list for high CPU high RAM machines. 
I would summarize it for you, but I\n> don't understand it, and don't have ready access to machines with 64 CPUs\n> and 128 GB of RAM in order to explore it for myself.\n>\n> But if that is the case, then using a connection pooler to restrict the\n> number of simultaneously active connections might actually be a big win\n> (despite what I said previously).\n>\n> Cheers,\n>\n> Jeff\n>\n\nThanks Jeff, yup, I'm actually busy setting up pg_pool now. Preliminary results looks promising after switching some client nodes to connect through the pool. \nHere's the output of pg_stat_dabatase but also doesn't seem to be spending more time there either. I'll have a look through the archives for the posts you refer to. Certainly a simple 'upgrade' that's gone south badly here. On paper and with earlier tests with partial load this instance is much better suited for our workload, bizarre. \nThanks for all the help. user=# select * from pg_stat_database; datid | datname | numbackends | xact_commit | xact_rollback | blks_read | blks_hit | tup_returned | tup_fetched | tup_inserted | tup_updated | tup_deleted | conflicts | temp_files | temp_bytes | deadlocks | blk_read_time | blk_write_time | stats_\nreset -------+-------------------+-------------+-------------+---------------+-----------+--------------+--------------+--------------+--------------+-------------+-------------+-----------+------------+-------------+-----------+---------------+----------------+\n 17671 | production | 232 | 524583730 | 213187 | 623507632 | 403590060861 | 256853219556 | 185283046675 | 95086454 | 66720609 | 4449894 | 0 | 18 | 12076277760 | 0 | 4689747.806 | 4526.539 | 2013-03-29 16:1\n8:09.334432+00\nOn Tue, Apr 2, 2013 at 8:08 AM, Jeff Janes <[email protected]> wrote:\nOn Monday, April 1, 2013, Armand du Plessis wrote:\nI've run an EXPLAIN ANALYZE on one of the queries that appeared in the pg_locks (although like you say that might be a red herring) both during normal response times (2) and also after the locks backlog materialized (1) \nThe output below, I've just blanked out some columns. The IO timings do seem an order of magnitude slower but not excessive unless I'm reading it wrong. \"Limit (cost=2364.19..2365.44 rows=500 width=177) (actual time=6501.103..6507.196 rows=500 loops=1)\"\n\" Output: \" Buffers: shared hit=7163 read=137\"\" I/O Timings: read=107.771\"\n ... \n\n\"Limit (cost=2366.57..2367.82 rows=500 width=177) (actual time=73.284..76.296 rows=500 loops=1)\"\n\" Output: various columns\"\" Buffers: shared hit=6738 read=562\"\" I/O Timings: read=19.212\"\nYou are correct that the different in IO timing for reads is not nearly enough to explain the difference, but the ratio is still large enough to perhaps be suggestive. It could be be that all the extra time is spent in IO writes (not reported here). If you turn on track_io_timing on system-wide you could check the write times in pg_stat_database. \n(Write time has an attribution problem. I need to make room for my data, so I write out someone else's. Is the time spent attributed to the one doing the writing, or the one who owns the data written?)\nBut it is perhaps looking like it might not be IO at all, but rather some kind of internal kernel problem, such as the \"zone reclaim\" and \"huge pages\" and memory interleaving, which have been discussed elsewhere in this list for high CPU high RAM machines. 
I would summarize it for you, but I don't understand it, and don't have ready access to machines with 64 CPUs and 128 GB of RAM in order to explore it for myself.\nBut if that is the case, then using a connection pooler to restrict the number of simultaneously active connections might actually be a big win (despite what I said previously). \n\n\n\nCheers,Jeff",
"msg_date": "Tue, 2 Apr 2013 08:21:47 +0200",
"msg_from": "Armand du Plessis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with pg_locks explosion"
},
{
"msg_contents": "On 02/04/13 19:08, Jeff Janes wrote:\n> On Monday, April 1, 2013, Mark Kirkwood wrote:\n>\n>\n> Your provisioned volumes are much better than the default AWS ones,\n> but are still not hugely fast (i.e 1000 IOPS is about 8 MB/s worth\n> of Postgres 8k buffers). So you may need to look at adding more\n> volumes into the array, or adding some separate ones and putting\n> pg_xlog directory on 'em.\n>\n> However before making changes I would recommend using iostat or sar\n> to monitor how volumes are handling the load (I usually choose a 1\n> sec granularity and look for 100% util and high - server hundred ms\n> - awaits). Also iotop could be enlightening.\n>\n>\n> Hi Mark,\n>\n> Do you have experience using these tools with AWS? When using non-DAS\n> in other contexts, I've noticed that these tools often give deranged\n> results, because the kernel doesn't correctly know what time to\n> attribute to \"network\" and what to attribute to \"disk\". But I haven't\n> looked into it on AWS EBS, maybe they do a better job there.\n> Thanks for any insight,\n>\n\n\n\nHi Jeff,\n\nThat is a very good point. I did notice a reasonable amount of network \ntraffic on the graphs posted previously, along with a suspiciously low \namount of IO for md127 (which I assume is the raid0 array)...and \nwondered if iostat was not seeing IO fully, however it slipped my mind \n(I am on leave with kittens - so claim that for the purrrfect excuse)!\n\nHowever I don't recall there being a problem with the io tools for \nstandard EBS volumes - but I haven't benchmarked AWS for a over a year, \nso things could be different now - and I have no experience with these \nnew provisioned volumes.\n\nArmand - it might be instructive to do some benchmarking (with another \nhost and volume set) where you do something like:\n\n$ dd if=/dev/zero of=file bs=8k count=1000000\n\nand see if iostat and friends actually show you doing IO as expected!\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 02 Apr 2013 19:25:00 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with pg_locks explosion"
},
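A slightly expanded sketch of the dd check suggested above; the output path is a placeholder, and oflag=direct is an added assumption so the writes bypass the page cache and show up in iostat straight away.

# terminal 1: write ~8 GB of 8k blocks to the volume under test
dd if=/dev/zero of=/data/ddtest.bin bs=8k count=1000000 oflag=direct
# terminal 2: confirm the writes are attributed to the EBS members / md array
iostat -d -x -m 1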
{
"msg_contents": "I had my reservations about my almost 0% IO usage on the raid0 array as\nwell. I'm looking at the numbers in atop and it doesn't seem to reflect the\naggregate of the volumes as one would expect. I'm just happy I am seeing\nnumbers on the volumes, they're not too bad.\n\nOne thing I was wondering, as a last possible IO resort. Provisioned EBS\nvolumes requires that you maintain a wait queue of 1 for every 200\nprovisioned IOPS to get reliable IO. My wait queue hovers between 0-1 and\nwith the 1000 IOPS it should be 5. Even thought about artificially pushing\nmore IO to the volumes but I think Jeff's right, there's some internal\nkernel voodoo at play here. I have a feeling it'll be under control with\npg_pool (if I can just get the friggen setup there right) and then I'll\nhave more time to dig into it deeper.\n\nApologies to the kittens for the interrupting your leave :)\n\n\nOn Tue, Apr 2, 2013 at 8:25 AM, Mark Kirkwood <[email protected]\n> wrote:\n\n> On 02/04/13 19:08, Jeff Janes wrote:\n>\n>> On Monday, April 1, 2013, Mark Kirkwood wrote:\n>>\n>>\n>> Your provisioned volumes are much better than the default AWS ones,\n>> but are still not hugely fast (i.e 1000 IOPS is about 8 MB/s worth\n>> of Postgres 8k buffers). So you may need to look at adding more\n>> volumes into the array, or adding some separate ones and putting\n>> pg_xlog directory on 'em.\n>>\n>> However before making changes I would recommend using iostat or sar\n>> to monitor how volumes are handling the load (I usually choose a 1\n>> sec granularity and look for 100% util and high - server hundred ms\n>> - awaits). Also iotop could be enlightening.\n>>\n>>\n>> Hi Mark,\n>>\n>> Do you have experience using these tools with AWS? When using non-DAS\n>> in other contexts, I've noticed that these tools often give deranged\n>> results, because the kernel doesn't correctly know what time to\n>> attribute to \"network\" and what to attribute to \"disk\". But I haven't\n>> looked into it on AWS EBS, maybe they do a better job there.\n>> Thanks for any insight,\n>>\n>>\n>\n>\n> Hi Jeff,\n>\n> That is a very good point. I did notice a reasonable amount of network\n> traffic on the graphs posted previously, along with a suspiciously low\n> amount of IO for md127 (which I assume is the raid0 array)...and wondered\n> if iostat was not seeing IO fully, however it slipped my mind (I am on\n> leave with kittens - so claim that for the purrrfect excuse)!\n>\n> However I don't recall there being a problem with the io tools for\n> standard EBS volumes - but I haven't benchmarked AWS for a over a year, so\n> things could be different now - and I have no experience with these new\n> provisioned volumes.\n>\n> Armand - it might be instructive to do some benchmarking (with another\n> host and volume set) where you do something like:\n>\n> $ dd if=/dev/zero of=file bs=8k count=1000000\n>\n> and see if iostat and friends actually show you doing IO as expected!\n>\n>\n>\n>\n\nI had my reservations about my almost 0% IO usage on the raid0 array as well. I'm looking at the numbers in atop and it doesn't seem to reflect the aggregate of the volumes as one would expect. I'm just happy I am seeing numbers on the volumes, they're not too bad. \nOne thing I was wondering, as a last possible IO resort. Provisioned EBS volumes requires that you maintain a wait queue of 1 for every 200 provisioned IOPS to get reliable IO. My wait queue hovers between 0-1 and with the 1000 IOPS it should be 5. 
Even thought about artificially pushing more IO to the volumes but I think Jeff's right, there's some internal kernel voodoo at play here. I have a feeling it'll be under control with pg_pool (if I can just get the friggen setup there right) and then I'll have more time to dig into it deeper. \nApologies to the kittens for the interrupting your leave :) On Tue, Apr 2, 2013 at 8:25 AM, Mark Kirkwood <[email protected]> wrote:\nOn 02/04/13 19:08, Jeff Janes wrote:\n\nOn Monday, April 1, 2013, Mark Kirkwood wrote:\n\n\n Your provisioned volumes are much better than the default AWS ones,\n but are still not hugely fast (i.e 1000 IOPS is about 8 MB/s worth\n of Postgres 8k buffers). So you may need to look at adding more\n volumes into the array, or adding some separate ones and putting\n pg_xlog directory on 'em.\n\n However before making changes I would recommend using iostat or sar\n to monitor how volumes are handling the load (I usually choose a 1\n sec granularity and look for 100% util and high - server hundred ms\n - awaits). Also iotop could be enlightening.\n\n\nHi Mark,\n\nDo you have experience using these tools with AWS? When using non-DAS\nin other contexts, I've noticed that these tools often give deranged\nresults, because the kernel doesn't correctly know what time to\nattribute to \"network\" and what to attribute to \"disk\". But I haven't\nlooked into it on AWS EBS, maybe they do a better job there.\nThanks for any insight,\n\n\n\n\n\nHi Jeff,\n\nThat is a very good point. I did notice a reasonable amount of network traffic on the graphs posted previously, along with a suspiciously low amount of IO for md127 (which I assume is the raid0 array)...and wondered if iostat was not seeing IO fully, however it slipped my mind (I am on leave with kittens - so claim that for the purrrfect excuse)!\n\nHowever I don't recall there being a problem with the io tools for standard EBS volumes - but I haven't benchmarked AWS for a over a year, so things could be different now - and I have no experience with these new provisioned volumes.\n\nArmand - it might be instructive to do some benchmarking (with another host and volume set) where you do something like:\n\n$ dd if=/dev/zero of=file bs=8k count=1000000\n\nand see if iostat and friends actually show you doing IO as expected!",
"msg_date": "Tue, 2 Apr 2013 08:33:29 +0200",
"msg_from": "Armand du Plessis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with pg_locks explosion"
},
{
"msg_contents": "Also it is worth checking what your sysctl vm.zone_reclaim_mode is set \nto - if 1 then override to 0. As Jeff mentioned, this gotcha for larger \ncpu number machines has been discussed at length on this list - but \nstill traps us now and again!\n\nCheers\n\nMark\n\nOn 02/04/13 19:33, Armand du Plessis wrote:\n> I had my reservations about my almost 0% IO usage on the raid0 array as\n> well. I'm looking at the numbers in atop and it doesn't seem to reflect\n> the aggregate of the volumes as one would expect. I'm just happy I am\n> seeing numbers on the volumes, they're not too bad.\n>\n> One thing I was wondering, as a last possible IO resort. Provisioned EBS\n> volumes requires that you maintain a wait queue of 1 for every 200\n> provisioned IOPS to get reliable IO. My wait queue hovers between 0-1\n> and with the 1000 IOPS it should be 5. Even thought about artificially\n> pushing more IO to the volumes but I think Jeff's right, there's some\n> internal kernel voodoo at play here. I have a feeling it'll be under\n> control with pg_pool (if I can just get the friggen setup there right)\n> and then I'll have more time to dig into it deeper.\n>\n> Apologies to the kittens for the interrupting your leave :)\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 02 Apr 2013 19:43:34 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with pg_locks explosion"
},
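For completeness, a sketch of the zone_reclaim_mode check just described, assuming the standard /proc and /etc/sysctl.conf locations.

cat /proc/sys/vm/zone_reclaim_mode            # 1 means zone reclaim is enabled
sudo sysctl -w vm.zone_reclaim_mode=0         # override to 0 as suggested above
echo "vm.zone_reclaim_mode = 0" | sudo tee -a /etc/sysctl.conf   # persist across reboots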
{
"msg_contents": "Touch wood but I think I found the problem thanks to these pointers. I\nchecked the vm.zone_reclaim_mode and mine was set to 0. However just before\nthe locking starts I can see many of my CPUs flashing red and jump to high\npercentage sys usage. When I look at top it's the migration kernel tasks\nthat seem to trigger it.\n\nSo it seems it was a bit trigger happy with task migrations, setting\nthe kernel.sched_migration_cost\nto 5000000 (5ms) seemed to have resolved my woes. I'm yet to see locks\nclimb and it's been running stable for a bit. This post was invaluable in\nexplaining the cause ->\nhttp://www.postgresql.org/message-id/[email protected]\n\n# Postgres Kernel Tweaks\nkernel.sched_migration_cost = 5000000\n# kernel.sched_autogroup_enabled = 0\n\nThe second recommended setting 'sched_autogroup_enabled' is not available\non the kernel I'm running but it doesn't seem to be a problem.\n\nAgain, thanks again for the help. It was seriously appreciated. Long night\nwas long.\n\nIf things change and the problem pops up again I'll update you guys.\n\nCheers,\n\nArmand\n\n\nOn Tue, Apr 2, 2013 at 8:43 AM, Mark Kirkwood <[email protected]\n> wrote:\n\n> Also it is worth checking what your sysctl vm.zone_reclaim_mode is set to\n> - if 1 then override to 0. As Jeff mentioned, this gotcha for larger cpu\n> number machines has been discussed at length on this list - but still traps\n> us now and again!\n>\n> Cheers\n>\n> Mark\n>\n>\n> On 02/04/13 19:33, Armand du Plessis wrote:\n>\n>> I had my reservations about my almost 0% IO usage on the raid0 array as\n>> well. I'm looking at the numbers in atop and it doesn't seem to reflect\n>> the aggregate of the volumes as one would expect. I'm just happy I am\n>> seeing numbers on the volumes, they're not too bad.\n>>\n>> One thing I was wondering, as a last possible IO resort. Provisioned EBS\n>> volumes requires that you maintain a wait queue of 1 for every 200\n>> provisioned IOPS to get reliable IO. My wait queue hovers between 0-1\n>> and with the 1000 IOPS it should be 5. Even thought about artificially\n>> pushing more IO to the volumes but I think Jeff's right, there's some\n>> internal kernel voodoo at play here. I have a feeling it'll be under\n>> control with pg_pool (if I can just get the friggen setup there right)\n>> and then I'll have more time to dig into it deeper.\n>>\n>> Apologies to the kittens for the interrupting your leave :)\n>>\n>>\n>\n\nTouch wood but I think I found the problem thanks to these pointers. I checked the vm.zone_reclaim_mode and mine was set to 0. However just before the locking starts I can see many of my CPUs flashing red and jump to high percentage sys usage. When I look at top it's the migration kernel tasks that seem to trigger it. \nSo it seems it was a bit trigger happy with task migrations, setting the kernel.sched_migration_cost to 5000000 (5ms) seemed to have resolved my woes. I'm yet to see locks climb and it's been running stable for a bit. This post was invaluable in explaining the cause -> http://www.postgresql.org/message-id/[email protected]\n# Postgres Kernel Tweakskernel.sched_migration_cost = 5000000\n# kernel.sched_autogroup_enabled = 0The second recommended setting 'sched_autogroup_enabled' is not available on the kernel I'm running but it doesn't seem to be a problem. \nAgain, thanks again for the help. It was seriously appreciated. Long night was long. \nIf things change and the problem pops up again I'll update you guys. 
\nCheers,Armand\nOn Tue, Apr 2, 2013 at 8:43 AM, Mark Kirkwood <[email protected]> wrote:\n\nAlso it is worth checking what your sysctl vm.zone_reclaim_mode is set to - if 1 then override to 0. As Jeff mentioned, this gotcha for larger cpu number machines has been discussed at length on this list - but still traps us now and again!\n\nCheers\n\nMark\n\nOn 02/04/13 19:33, Armand du Plessis wrote:\n\nI had my reservations about my almost 0% IO usage on the raid0 array as\nwell. I'm looking at the numbers in atop and it doesn't seem to reflect\nthe aggregate of the volumes as one would expect. I'm just happy I am\nseeing numbers on the volumes, they're not too bad.\n\nOne thing I was wondering, as a last possible IO resort. Provisioned EBS\nvolumes requires that you maintain a wait queue of 1 for every 200\nprovisioned IOPS to get reliable IO. My wait queue hovers between 0-1\nand with the 1000 IOPS it should be 5. Even thought about artificially\npushing more IO to the volumes but I think Jeff's right, there's some\ninternal kernel voodoo at play here. I have a feeling it'll be under\ncontrol with pg_pool (if I can just get the friggen setup there right)\nand then I'll have more time to dig into it deeper.\n\nApologies to the kittens for the interrupting your leave :)",
"msg_date": "Tue, 2 Apr 2013 10:16:57 +0200",
"msg_from": "Armand du Plessis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with pg_locks explosion"
},
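A sketch of applying and persisting the migration-cost setting quoted above; the value simply mirrors the post, and note that newer kernels expose the same tunable as kernel.sched_migration_cost_ns.

sudo sysctl -w kernel.sched_migration_cost=5000000
echo "kernel.sched_migration_cost = 5000000" | sudo tee -a /etc/sysctl.conf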
{
"msg_contents": "Jumped the gun a bit. the problem still exists like before. But it's\ndefinitely on the right track, below is the output from top in the seconds\nbefore the cluster locks up. For some reason still insisting on moving\ntasks around despite bumping the sched_migration_cost cost up to 100ms.\n\n77 root RT 0 0 0 0 S 32.3 0.0 13:55.20 [migration/24]\n\n\n\n26512 postgres 20 0 8601m 7388 4992 R 32.3 0.0 0:02.17 postgres:\nother_user xxxx(52944) INSERT\n\n\n 38 root RT 0 0 0 0 S 31.3 0.0 17:26.15 [migration/11]\n\n\n 65 root RT 0 0\n 0 0 S 30.0 0.0 13:18.66 [migration/20]\n\n\n\n 62 root RT 0 0 0 0 S 29.7 0.0 12:58.81 [migration/19]\n\n\n\n 47 root RT 0 0 0 0 S 29.0 0.0 18:16.43 [migration/14]\n\n\n\n 29 root RT 0 0 0 0 S 28.7 0.0 25:21.47 [migration/8]\n\n\n\n 71 root RT 0 0 0 0 S 28.4 0.0 13:20.31 [migration/22]\n\n\n\n 95 root RT 0 0 0 0 S 23.8 0.0 13:37.31 [migration/30]\n\n\n\n26518 postgres 20 0 8601m 9684 5228 S 21.2 0.0 0:01.89 postgres:\nother_user xxxxx(52954) INSERT\n\n\n 6 root RT 0 0 0 0 S 20.5 0.0 39:17.72 [migration/0]\n\n\n\n 41 root RT 0 0 0 0 S 19.6 0.0 18:21.36 [migration/12]\n\n\n\n 68 root RT 0 0 0 0 S 19.6 0.0 13:04.62 [migration/21]\n\n\n\n 74 root RT 0 0 0 0 S 18.9 0.0 13:39.41 [migration/23]\n\n\n\n 305 root 20 0 0 0 0 S 18.3 0.0 11:34.52 [kworker/27:1]\n\n\n\n 44 root RT 0 0 0 0 S 17.0 0.0 18:30.71 [migration/13]\n\n\n\n 89 root RT 0 0 0 0 S 16.0 0.0 12:13.42 [migration/28]\n\n\n\n 7 root RT 0 0 0 0 S 15.3 0.0 21:58.56 [migration/1]\n\n\n\n 35 root RT 0 0 0 0 S 15.3 0.0 20:02.05 [migration/10]\n\n\n\n 53 root RT 0 0 0 0 S 14.0 0.0 12:51.46 [migration/16]\n\n\n\n11254 root 0 -20 21848 7532 2788 S 11.7 0.0 22:35.66 atop 1\n\n\n\n 14 root RT 0 0 0 0 S 10.8 0.0 19:36.56 [migration/3]\n\n\n\n26463 postgres 20 0 8601m 7492 5100 R 10.8 0.0 0:00.33 postgres:\nother_user xxxxx(32835) INSERT\n\n\n 32 root RT 0 0 0 0 S 10.1 0.0 20:46.18 [migration/9]\n\n\n\n16793 root 20 0 0 0 0 S 6.5 0.0 1:12.72 [kworker/25:0]\n\n\n\n 20 root RT 0 0 0 0 S 5.5 0.0 18:51.81 [migration/5]\n\n\n\n 48 root 20 0 0 0 0 S 5.5 0.0 3:52.93 [kworker/14:0]\n\n\n\nOn Tue, Apr 2, 2013 at 10:16 AM, Armand du Plessis <[email protected]> wrote:\n\n> Touch wood but I think I found the problem thanks to these pointers. I\n> checked the vm.zone_reclaim_mode and mine was set to 0. However just\n> before the locking starts I can see many of my CPUs flashing red and jump\n> to high percentage sys usage. When I look at top it's the migration kernel\n> tasks that seem to trigger it.\n>\n> So it seems it was a bit trigger happy with task migrations, setting the kernel.sched_migration_cost\n> to 5000000 (5ms) seemed to have resolved my woes. I'm yet to see locks\n> climb and it's been running stable for a bit. This post was invaluable in\n> explaining the cause ->\n> http://www.postgresql.org/message-id/[email protected]\n>\n> # Postgres Kernel Tweaks\n> kernel.sched_migration_cost = 5000000\n> # kernel.sched_autogroup_enabled = 0\n>\n> The second recommended setting 'sched_autogroup_enabled' is not available\n> on the kernel I'm running but it doesn't seem to be a problem.\n>\n> Again, thanks again for the help. It was seriously appreciated. Long night\n> was long.\n>\n> If things change and the problem pops up again I'll update you guys.\n>\n> Cheers,\n>\n> Armand\n>\n>\n> On Tue, Apr 2, 2013 at 8:43 AM, Mark Kirkwood <\n> [email protected]> wrote:\n>\n>> Also it is worth checking what your sysctl vm.zone_reclaim_mode is set to\n>> - if 1 then override to 0. 
As Jeff mentioned, this gotcha for larger cpu\n>> number machines has been discussed at length on this list - but still traps\n>> us now and again!\n>>\n>> Cheers\n>>\n>> Mark\n>\n>\n\nJumped the gun a bit. the problem still exists like before. But it's definitely on the right track, below is the output from top in the seconds before the cluster locks up. For some reason still insisting on moving tasks around despite bumping the sched_migration_cost cost up to 100ms. \n77 root RT 0 0 0 0 S 32.3 0.0 13:55.20 [migration/24] \n26512 postgres 20 0 8601m 7388 4992 R 32.3 0.0 0:02.17 postgres: other_user xxxx(52944) INSERT \n 38 root RT 0 0 0 0 S 31.3 0.0 17:26.15 [migration/11] 65 root RT 0 0 0 0 S 30.0 0.0 13:18.66 [migration/20] \n 62 root RT 0 0 0 0 S 29.7 0.0 12:58.81 [migration/19] \n 47 root RT 0 0 0 0 S 29.0 0.0 18:16.43 [migration/14] \n 29 root RT 0 0 0 0 S 28.7 0.0 25:21.47 [migration/8] \n 71 root RT 0 0 0 0 S 28.4 0.0 13:20.31 [migration/22] \n 95 root RT 0 0 0 0 S 23.8 0.0 13:37.31 [migration/30] \n26518 postgres 20 0 8601m 9684 5228 S 21.2 0.0 0:01.89 postgres: other_user xxxxx(52954) INSERT \n 6 root RT 0 0 0 0 S 20.5 0.0 39:17.72 [migration/0] \n 41 root RT 0 0 0 0 S 19.6 0.0 18:21.36 [migration/12] \n 68 root RT 0 0 0 0 S 19.6 0.0 13:04.62 [migration/21] \n 74 root RT 0 0 0 0 S 18.9 0.0 13:39.41 [migration/23] \n 305 root 20 0 0 0 0 S 18.3 0.0 11:34.52 [kworker/27:1] \n 44 root RT 0 0 0 0 S 17.0 0.0 18:30.71 [migration/13] \n 89 root RT 0 0 0 0 S 16.0 0.0 12:13.42 [migration/28] \n 7 root RT 0 0 0 0 S 15.3 0.0 21:58.56 [migration/1] \n 35 root RT 0 0 0 0 S 15.3 0.0 20:02.05 [migration/10] \n 53 root RT 0 0 0 0 S 14.0 0.0 12:51.46 [migration/16] \n11254 root 0 -20 21848 7532 2788 S 11.7 0.0 22:35.66 atop 1 \n 14 root RT 0 0 0 0 S 10.8 0.0 19:36.56 [migration/3] \n26463 postgres 20 0 8601m 7492 5100 R 10.8 0.0 0:00.33 postgres: other_user xxxxx(32835) INSERT \n 32 root RT 0 0 0 0 S 10.1 0.0 20:46.18 [migration/9] \n16793 root 20 0 0 0 0 S 6.5 0.0 1:12.72 [kworker/25:0] \n 20 root RT 0 0 0 0 S 5.5 0.0 18:51.81 [migration/5] \n 48 root 20 0 0 0 0 S 5.5 0.0 3:52.93 [kworker/14:0] On Tue, Apr 2, 2013 at 10:16 AM, Armand du Plessis <[email protected]> wrote:\nTouch wood but I think I found the problem thanks to these pointers. I checked the vm.zone_reclaim_mode and mine was set to 0. However just before the locking starts I can see many of my CPUs flashing red and jump to high percentage sys usage. When I look at top it's the migration kernel tasks that seem to trigger it. \nSo it seems it was a bit trigger happy with task migrations, setting the kernel.sched_migration_cost to 5000000 (5ms) seemed to have resolved my woes. I'm yet to see locks climb and it's been running stable for a bit. This post was invaluable in explaining the cause -> http://www.postgresql.org/message-id/[email protected]\n# Postgres Kernel Tweakskernel.sched_migration_cost = 5000000\n# kernel.sched_autogroup_enabled = 0The second recommended setting 'sched_autogroup_enabled' is not available on the kernel I'm running but it doesn't seem to be a problem. \nAgain, thanks again for the help. It was seriously appreciated. Long night was long. \nIf things change and the problem pops up again I'll update you guys. \nCheers,Armand\nOn Tue, Apr 2, 2013 at 8:43 AM, Mark Kirkwood <[email protected]> wrote:\r\n\r\n\r\nAlso it is worth checking what your sysctl vm.zone_reclaim_mode is set to - if 1 then override to 0. 
As Jeff mentioned, this gotcha for larger cpu number machines has been discussed at length on this list - but still traps us now and again!\n\r\nCheers\n\r\nMark",
"msg_date": "Tue, 2 Apr 2013 11:55:14 +0200",
"msg_from": "Armand du Plessis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with pg_locks explosion"
},
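A rough way to keep an eye on the per-CPU migration kernel threads visible in the top output above, assuming a top that supports batch mode; output layout may differ between versions.

watch -n 1 "top -b -n 1 | grep 'migration/'"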
{
"msg_contents": "It's now running as expected, I made a few other tweaks to get it to an\noperational state again. So just for closure on this dark period below some\nnotes.\n\nThere was two triggers that caused the almost instant backlog of locks. As\nsuspected the one was scheduler that caused endless problems whenever it\nstarted migrating tasks. This would lock up the database (and server) for a\nsecond or more after which a few thousand locks existed.\n\nSetting the kernel.sched_migration_cost to 5ms didn't have the desired\neffect. The scheduler would still stop the world and break down for a few\nseconds. After much anguish and research (this is a pretty good explanation\nof the scheduler tunables\nhttp://events.linuxfoundation.org/slides/2011/linuxcon/lcna2011_rajan.pdf)\nI adjusted :\n\nsysctl -w kernel.sched_min_granularity_ns=9000000\nsysctl -w kernel.sched_wakeup_granularity_ns=12000000\n\nSince then I haven't had an interruption from migration.\n\nI also switched hugepage to not be as aggressive, it was also intrusive and\nmy Postgres not configured to use it. \"echo madvise >\n/sys/kernel/mm/transparent_hugepage/defrag\"\n\nAfter these changes things started smoothing out and running like it\nshould. I also found that if you are running on striped EBS volumes you\nshould really try and get them busy to get consistent performance. I\nchecked with Amazon and the usage I see on the individual modules were\ncorrect. (Not the RAID figure but the numbers on the individual volumes,\nthey were idling)\n\nThanks again for all the help and suggestions.\n\nArmand\n\n\nOn Tue, Apr 2, 2013 at 11:55 AM, Armand du Plessis <[email protected]> wrote:\n\n> Jumped the gun a bit. the problem still exists like before. But it's\n> definitely on the right track, below is the output from top in the seconds\n> before the cluster locks up. 
For some reason still insisting on moving\n> tasks around despite bumping the sched_migration_cost cost up to 100ms.\n>\n> 77 root RT 0 0 0 0 S 32.3 0.0 13:55.20 [migration/24]\n>\n>\n>\n> 26512 postgres 20 0 8601m 7388 4992 R 32.3 0.0 0:02.17 postgres:\n> other_user xxxx(52944) INSERT\n>\n>\n> 38 root RT 0 0 0 0 S 31.3 0.0 17:26.15\n> [migration/11]\n>\n> 65 root\n> RT 0 0 0 0 S 30.0 0.0 13:18.66 [migration/20]\n>\n>\n>\n> 62 root RT 0 0 0 0 S 29.7 0.0 12:58.81\n> [migration/19]\n>\n>\n> 47 root RT 0 0 0 0 S 29.0 0.0 18:16.43\n> [migration/14]\n>\n>\n> 29 root RT 0 0 0 0 S 28.7 0.0 25:21.47 [migration/8]\n>\n>\n>\n> 71 root RT 0 0 0 0 S 28.4 0.0 13:20.31\n> [migration/22]\n>\n>\n> 95 root RT 0 0 0 0 S 23.8 0.0 13:37.31\n> [migration/30]\n>\n>\n> 26518 postgres 20 0 8601m 9684 5228 S 21.2 0.0 0:01.89 postgres:\n> other_user xxxxx(52954) INSERT\n>\n>\n> 6 root RT 0 0 0 0 S 20.5 0.0 39:17.72 [migration/0]\n>\n>\n>\n> 41 root RT 0 0 0 0 S 19.6 0.0 18:21.36\n> [migration/12]\n>\n>\n> 68 root RT 0 0 0 0 S 19.6 0.0 13:04.62\n> [migration/21]\n>\n>\n> 74 root RT 0 0 0 0 S 18.9 0.0 13:39.41\n> [migration/23]\n>\n>\n> 305 root 20 0 0 0 0 S 18.3 0.0 11:34.52\n> [kworker/27:1]\n>\n>\n> 44 root RT 0 0 0 0 S 17.0 0.0 18:30.71\n> [migration/13]\n>\n>\n> 89 root RT 0 0 0 0 S 16.0 0.0 12:13.42\n> [migration/28]\n>\n>\n> 7 root RT 0 0 0 0 S 15.3 0.0 21:58.56 [migration/1]\n>\n>\n>\n> 35 root RT 0 0 0 0 S 15.3 0.0 20:02.05\n> [migration/10]\n>\n>\n> 53 root RT 0 0 0 0 S 14.0 0.0 12:51.46\n> [migration/16]\n>\n>\n> 11254 root 0 -20 21848 7532 2788 S 11.7 0.0 22:35.66 atop 1\n>\n>\n>\n> 14 root RT 0 0 0 0 S 10.8 0.0 19:36.56 [migration/3]\n>\n>\n>\n> 26463 postgres 20 0 8601m 7492 5100 R 10.8 0.0 0:00.33 postgres:\n> other_user xxxxx(32835) INSERT\n>\n>\n> 32 root RT 0 0 0 0 S 10.1 0.0 20:46.18 [migration/9]\n>\n>\n>\n> 16793 root 20 0 0 0 0 S 6.5 0.0 1:12.72\n> [kworker/25:0]\n>\n>\n> 20 root RT 0 0 0 0 S 5.5 0.0 18:51.81 [migration/5]\n>\n>\n>\n> 48 root 20 0 0 0 0 S 5.5 0.0 3:52.93\n> [kworker/14:0]\n>\n>\n> On Tue, Apr 2, 2013 at 10:16 AM, Armand du Plessis <[email protected]> wrote:\n>\n>> Touch wood but I think I found the problem thanks to these pointers. I\n>> checked the vm.zone_reclaim_mode and mine was set to 0. However just\n>> before the locking starts I can see many of my CPUs flashing red and jump\n>> to high percentage sys usage. When I look at top it's the migration kernel\n>> tasks that seem to trigger it.\n>>\n>> So it seems it was a bit trigger happy with task migrations, setting the kernel.sched_migration_cost\n>> to 5000000 (5ms) seemed to have resolved my woes. I'm yet to see locks\n>> climb and it's been running stable for a bit. This post was invaluable in\n>> explaining the cause ->\n>> http://www.postgresql.org/message-id/[email protected]\n>>\n>> # Postgres Kernel Tweaks\n>> kernel.sched_migration_cost = 5000000\n>> # kernel.sched_autogroup_enabled = 0\n>>\n>> The second recommended setting 'sched_autogroup_enabled' is not\n>> available on the kernel I'm running but it doesn't seem to be a problem.\n>>\n>> Again, thanks again for the help. It was seriously appreciated. Long\n>> night was long.\n>>\n>> If things change and the problem pops up again I'll update you guys.\n>>\n>> Cheers,\n>>\n>> Armand\n>>\n>>\n>> On Tue, Apr 2, 2013 at 8:43 AM, Mark Kirkwood <\n>> [email protected]> wrote:\n>>\n>>> Also it is worth checking what your sysctl vm.zone_reclaim_mode is set\n>>> to - if 1 then override to 0. 
As Jeff mentioned, this gotcha for larger cpu\n>>> number machines has been discussed at length on this list - but still traps\n>>> us now and again!\n>>>\n>>> Cheers\n>>>\n>>> Mark\n>>\n>>\n\nIt's now running as expected, I made a few other tweaks to get it to an operational state again. So just for closure on this dark period below some notes. There was two triggers that caused the almost instant backlog of locks. As suspected the one was scheduler that caused endless problems whenever it started migrating tasks. This would lock up the database (and server) for a second or more after which a few thousand locks existed. \nSetting the kernel.sched_migration_cost to 5ms didn't have the desired effect. The scheduler would still stop the world and break down for a few seconds. After much anguish and research (this is a pretty good explanation of the scheduler tunables http://events.linuxfoundation.org/slides/2011/linuxcon/lcna2011_rajan.pdf) I adjusted :\nsysctl -w kernel.sched_min_granularity_ns=9000000sysctl -w kernel.sched_wakeup_granularity_ns=12000000Since then I haven't had an interruption from migration. \nI also switched hugepage to not be as aggressive, it was also intrusive and my Postgres not configured to use it. \"echo madvise > /sys/kernel/mm/transparent_hugepage/defrag\"\nAfter these changes things started smoothing out and running like it should. I also found that if you are running on striped EBS volumes you should really try and get them busy to get consistent performance. I checked with Amazon and the usage I see on the individual modules were correct. (Not the RAID figure but the numbers on the individual volumes, they were idling)\nThanks again for all the help and suggestions. \nArmand\r\n\r\nOn Tue, Apr 2, 2013 at 11:55 AM, Armand du Plessis <[email protected]> wrote:\nJumped the gun a bit. the problem still exists like before. But it's definitely on the right track, below is the output from top in the seconds before the cluster locks up. For some reason still insisting on moving tasks around despite bumping the sched_migration_cost cost up to 100ms. 
\n77 root RT 0 0 0 0 S 32.3 0.0 13:55.20 [migration/24] \n26512 postgres 20 0 8601m 7388 4992 R 32.3 0.0 0:02.17 postgres: other_user xxxx(52944) INSERT \n 38 root RT 0 0 0 0 S 31.3 0.0 17:26.15 [migration/11] 65 root RT 0 0 0 0 S 30.0 0.0 13:18.66 [migration/20] \n 62 root RT 0 0 0 0 S 29.7 0.0 12:58.81 [migration/19] \n 47 root RT 0 0 0 0 S 29.0 0.0 18:16.43 [migration/14] \n 29 root RT 0 0 0 0 S 28.7 0.0 25:21.47 [migration/8] \n 71 root RT 0 0 0 0 S 28.4 0.0 13:20.31 [migration/22] \n 95 root RT 0 0 0 0 S 23.8 0.0 13:37.31 [migration/30] \n26518 postgres 20 0 8601m 9684 5228 S 21.2 0.0 0:01.89 postgres: other_user xxxxx(52954) INSERT \n 6 root RT 0 0 0 0 S 20.5 0.0 39:17.72 [migration/0] \n 41 root RT 0 0 0 0 S 19.6 0.0 18:21.36 [migration/12] \n 68 root RT 0 0 0 0 S 19.6 0.0 13:04.62 [migration/21] \n 74 root RT 0 0 0 0 S 18.9 0.0 13:39.41 [migration/23] \n 305 root 20 0 0 0 0 S 18.3 0.0 11:34.52 [kworker/27:1] \n 44 root RT 0 0 0 0 S 17.0 0.0 18:30.71 [migration/13] \n 89 root RT 0 0 0 0 S 16.0 0.0 12:13.42 [migration/28] \n 7 root RT 0 0 0 0 S 15.3 0.0 21:58.56 [migration/1] \n 35 root RT 0 0 0 0 S 15.3 0.0 20:02.05 [migration/10] \n 53 root RT 0 0 0 0 S 14.0 0.0 12:51.46 [migration/16] \n11254 root 0 -20 21848 7532 2788 S 11.7 0.0 22:35.66 atop 1 \n 14 root RT 0 0 0 0 S 10.8 0.0 19:36.56 [migration/3] \n26463 postgres 20 0 8601m 7492 5100 R 10.8 0.0 0:00.33 postgres: other_user xxxxx(32835) INSERT \n 32 root RT 0 0 0 0 S 10.1 0.0 20:46.18 [migration/9] \n16793 root 20 0 0 0 0 S 6.5 0.0 1:12.72 [kworker/25:0] \n 20 root RT 0 0 0 0 S 5.5 0.0 18:51.81 [migration/5] \n 48 root 20 0 0 0 0 S 5.5 0.0 3:52.93 [kworker/14:0] On Tue, Apr 2, 2013 at 10:16 AM, Armand du Plessis <[email protected]> wrote:\nTouch wood but I think I found the problem thanks to these pointers. I checked the vm.zone_reclaim_mode and mine was set to 0. However just before the locking starts I can see many of my CPUs flashing red and jump to high percentage sys usage. When I look at top it's the migration kernel tasks that seem to trigger it. \nSo it seems it was a bit trigger happy with task migrations, setting the kernel.sched_migration_cost to 5000000 (5ms) seemed to have resolved my woes. I'm yet to see locks climb and it's been running stable for a bit. This post was invaluable in explaining the cause -> http://www.postgresql.org/message-id/[email protected]\n# Postgres Kernel Tweakskernel.sched_migration_cost = 5000000\n# kernel.sched_autogroup_enabled = 0The second recommended setting 'sched_autogroup_enabled' is not available on the kernel I'm running but it doesn't seem to be a problem. \nAgain, thanks again for the help. It was seriously appreciated. Long night was long. \nIf things change and the problem pops up again I'll update you guys. \nCheers,Armand\nOn Tue, Apr 2, 2013 at 8:43 AM, Mark Kirkwood <[email protected]> wrote:\r\n\r\n\r\n\r\nAlso it is worth checking what your sysctl vm.zone_reclaim_mode is set to - if 1 then override to 0. As Jeff mentioned, this gotcha for larger cpu number machines has been discussed at length on this list - but still traps us now and again!\n\r\nCheers\n\r\nMark",
"msg_date": "Tue, 2 Apr 2013 20:16:03 +0200",
"msg_from": "Armand du Plessis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problems with pg_locks explosion"
}
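The settings reported to have stabilised the box, gathered into one runnable sketch; the values are exactly those in the summary above, while persisting them via /etc/sysctl.conf and an rc.local entry for the THP knob is an assumption.

sudo sysctl -w kernel.sched_min_granularity_ns=9000000
sudo sysctl -w kernel.sched_wakeup_granularity_ns=12000000
echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
# sysctl values can be kept in /etc/sysctl.conf; the THP line needs re-running at boot (e.g. from rc.local)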
] |
[
{
"msg_contents": "So It was announced that there would be a security patch for all versions\nreleased on the 4th. I see it's been announced/released on the website, but\nthe versions available show Feb dates.\n\nShould the source be current? Or does it take a while for source and other\nto be made available?\n\nFigured if the site says released, it should be available.\n\nThanks\nTory\n\n[image: postgresql-9.2.3.tar.bz2]<http://ftp.postgresql.org/pub/source/v9.2.3/postgresql-9.2.3.tar.bz2>\n postgresql-9.2.3.tar.bz2<http://ftp.postgresql.org/pub/source/v9.2.3/postgresql-9.2.3.tar.bz2>2013-02-07\n10:25:1015.6 MB [image:\npostgresql-9.2.3.tar.bz2.md5]<http://ftp.postgresql.org/pub/source/v9.2.3/postgresql-9.2.3.tar.bz2.md5>\n postgresql-9.2.3.tar.bz2.md5<http://ftp.postgresql.org/pub/source/v9.2.3/postgresql-9.2.3.tar.bz2.md5>2013-02-07\n10:25:1059 bytes [image:\npostgresql-9.2.3.tar.gz]<http://ftp.postgresql.org/pub/source/v9.2.3/postgresql-9.2.3.tar.gz>\n postgresql-9.2.3.tar.gz<http://ftp.postgresql.org/pub/source/v9.2.3/postgresql-9.2.3.tar.gz>2013-02-07\n10:25:1220.5 MB [image:\npostgresql-9.2.3.tar.gz.md5]<http://ftp.postgresql.org/pub/source/v9.2.3/postgresql-9.2.3.tar.gz.md5>\n postgresql-9.2.3.tar.gz.md5<http://ftp.postgresql.org/pub/source/v9.2.3/postgresql-9.2.3.tar.gz.md5>2013-02-07\n10:25:1358 bytes\n\nSo It was announced that there would be a security patch for all versions released on the 4th. I see it's been announced/released on the website, but the versions available show Feb dates.\nShould the source be current? Or does it take a while for source and other to be made available?Figured if the site says released, it should be available.ThanksTory\n postgresql-9.2.3.tar.bz2\n2013-02-07 10:25:1015.6 MB\n postgresql-9.2.3.tar.bz2.md5\n2013-02-07 10:25:1059 bytes\n postgresql-9.2.3.tar.gz\n2013-02-07 10:25:1220.5 MB\n postgresql-9.2.3.tar.gz.md5\n2013-02-07 10:25:1358 bytes",
"msg_date": "Mon, 1 Apr 2013 17:10:22 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres upgrade, security release, where?"
},
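Once tarballs are published, the .md5 files in the listing above can be used to verify a download; a sketch using the 9.2.3 files shown (the same pattern should hold for the upcoming release).

wget http://ftp.postgresql.org/pub/source/v9.2.3/postgresql-9.2.3.tar.bz2
wget http://ftp.postgresql.org/pub/source/v9.2.3/postgresql-9.2.3.tar.bz2.md5
md5sum -c postgresql-9.2.3.tar.bz2.md5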
{
"msg_contents": "On Mon, Apr 1, 2013 at 05:10:22PM -0700, Tory M Blue wrote:\n> So It was announced that there would be a security patch for all versions\n> released on the 4th. I see it's been announced/released on the website, but the\n> versions available show Feb dates.\n> \n> Should the source be current? Or does it take a while for source and other to\n> be made available?\n> \n> Figured if the site says released, it should be available.\n> \n> Thanks\n> Tory\n> \n> postgresql-9.2.3.tar.bz2 2013-02-07 15.6\n> postgresql-9.2.3.tar.bz2 10:25:10 MB\n> postgresql-9.2.3.tar.bz2.md5 2013-02-07 59\n> postgresql-9.2.3.tar.bz2.md5 10:25:10 bytes\n> postgresql-9.2.3.tar.gz postgresql-9.2.3.tar.gz 2013-02-07 20.5\n> 10:25:12 MB\n> postgresql-9.2.3.tar.gz.md5 2013-02-07 58\n> postgresql-9.2.3.tar.gz.md5 10:25:13 bytes\n\nDue to the security nature of the release, the source and binaries will\nonly be publicly available on April 4 --- there are no pre-release\nversions available.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 1 Apr 2013 20:27:06 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres upgrade, security release, where?"
},
{
"msg_contents": "2013/4/2 Bruce Momjian <[email protected]>:\n> On Mon, Apr 1, 2013 at 05:10:22PM -0700, Tory M Blue wrote:\n>> So It was announced that there would be a security patch for all versions\n>> released on the 4th. I see it's been announced/released on the website, but the\n>> versions available show Feb dates.\n>>\n>> Should the source be current? Or does it take a while for source and other to\n>> be made available?\n>>\n>> Figured if the site says released, it should be available.\n>>\n>> Thanks\n>> Tory\n>>\n>> postgresql-9.2.3.tar.bz2 2013-02-07 15.6\n>> postgresql-9.2.3.tar.bz2 10:25:10 MB\n>> postgresql-9.2.3.tar.bz2.md5 2013-02-07 59\n>> postgresql-9.2.3.tar.bz2.md5 10:25:10 bytes\n>> postgresql-9.2.3.tar.gz postgresql-9.2.3.tar.gz 2013-02-07 20.5\n>> 10:25:12 MB\n>> postgresql-9.2.3.tar.gz.md5 2013-02-07 58\n>> postgresql-9.2.3.tar.gz.md5 10:25:13 bytes\n>\n> Due to the security nature of the release, the source and binaries will\n> only be publicly available on April 4 --- there are no pre-release\n> versions available.\n\nThe PostgreSQL homepage has a big announcement saying\n\"PostgreSQL minor versions released!\", including a mention of a\n\"security issue\";\nunfortunately it's not obvious that this is for the prior 9.2.3 release and as\nthe announcement of the upcoming security release\n( http://www.postgresql.org/about/news/1454/ ) does not mention the\nnew release number, methinks there is plenty of room for confusion :(\n\nIt might be an idea to update the \"splash box\" with details of the upcoming\nrelease.\n\nRegards\n\n\nIan Barwick\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 2 Apr 2013 09:40:07 +0900",
"msg_from": "Ian Lawrence Barwick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres upgrade, security release, where?"
},
{
"msg_contents": "On Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:\n> > Due to the security nature of the release, the source and binaries will\n> > only be publicly available on April 4 --- there are no pre-release\n> > versions available.\n> \n> The PostgreSQL homepage has a big announcement saying\n> \"PostgreSQL minor versions released!\", including a mention of a\n> \"security issue\";\n> unfortunately it's not obvious that this is for the prior 9.2.3 release and as\n> the announcement of the upcoming security release\n> ( http://www.postgresql.org/about/news/1454/ ) does not mention the\n> new release number, methinks there is plenty of room for confusion :(\n> \n> It might be an idea to update the \"splash box\" with details of the upcoming\n> release.\n\nI agree updating the \"spash box\" would make sense.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 1 Apr 2013 20:55:26 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres upgrade, security release, where?"
},
{
"msg_contents": "On Mon, Apr 1, 2013 at 5:55 PM, Bruce Momjian <[email protected]> wrote:\n\n> On Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:\n> > > Due to the security nature of the release, the source and binaries will\n> > > only be publicly available on April 4 --- there are no pre-release\n> > > versions available.\n> >\n> > The PostgreSQL homepage has a big announcement saying\n> > \"PostgreSQL minor versions released!\", including a mention of a\n> > \"security issue\";\n> > unfortunately it's not obvious that this is for the prior 9.2.3 release\n> and as\n> > the announcement of the upcoming security release\n> > ( http://www.postgresql.org/about/news/1454/ ) does not mention the\n> > new release number, methinks there is plenty of room for confusion :(\n> >\n> > It might be an idea to update the \"splash box\" with details of the\n> upcoming\n> > release.\n>\n> >I agree updating the \"spash box\" would make sense.\n>\n\nThanks all\n\nMy confusion was due to the fact that the other day there was a splash box\nor other indication regarding the security fix release of April 4th and\nwhen I went back today (just because), the message had changed citing there\nwas a security fix etc and no mention of a major fix coming in a few days.\n\nMy apologies for the confusion\n\nTory\n\nOn Mon, Apr 1, 2013 at 5:55 PM, Bruce Momjian <[email protected]> wrote:\nOn Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:\n> > Due to the security nature of the release, the source and binaries will\n> > only be publicly available on April 4 --- there are no pre-release\n> > versions available.\n>\n> The PostgreSQL homepage has a big announcement saying\n> \"PostgreSQL minor versions released!\", including a mention of a\n> \"security issue\";\n> unfortunately it's not obvious that this is for the prior 9.2.3 release and as\n> the announcement of the upcoming security release\n> ( http://www.postgresql.org/about/news/1454/ ) does not mention the\n> new release number, methinks there is plenty of room for confusion :(\n>\n> It might be an idea to update the \"splash box\" with details of the upcoming\n> release.\n\n>I agree updating the \"spash box\" would make sense.\nThanks allMy confusion was due to the fact that the other day there was a splash box or other indication regarding the security fix release of April 4th and when I went back today (just because), the message had changed citing there was a security fix etc and no mention of a major fix coming in a few days.\nMy apologies for the confusionTory",
"msg_date": "Mon, 1 Apr 2013 20:35:54 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres upgrade, security release, where?"
},
{
"msg_contents": "On 02/04/13 13:55, Bruce Momjian wrote:\n> On Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:\n>>> Due to the security nature of the release, the source and binaries will\n>>> only be publicly available on April 4 --- there are no pre-release\n>>> versions available.\n>>\n>> The PostgreSQL homepage has a big announcement saying\n>> \"PostgreSQL minor versions released!\", including a mention of a\n>> \"security issue\";\n>> unfortunately it's not obvious that this is for the prior 9.2.3 release and as\n>> the announcement of the upcoming security release\n>> ( http://www.postgresql.org/about/news/1454/ ) does not mention the\n>> new release number, methinks there is plenty of room for confusion :(\n>>\n>> It might be an idea to update the \"splash box\" with details of the upcoming\n>> release.\n>\n> I agree updating the \"spash box\" would make sense.\n>\n\nOr perhaps include a date on said splashes, so we know when to panic :-)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 02 Apr 2013 16:43:48 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres upgrade, security release, where?"
},
{
"msg_contents": "On Mon, Apr 1, 2013 at 11:43 PM, Mark Kirkwood\n<[email protected]> wrote:\n> On 02/04/13 13:55, Bruce Momjian wrote:\n>>\n>> On Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:\n>>>>\n>>>> Due to the security nature of the release, the source and binaries will\n>>>> only be publicly available on April 4 --- there are no pre-release\n>>>> versions available.\n>>>\n>>>\n>>> The PostgreSQL homepage has a big announcement saying\n>>> \"PostgreSQL minor versions released!\", including a mention of a\n>>> \"security issue\";\n>>> unfortunately it's not obvious that this is for the prior 9.2.3 release\n>>> and as\n>>> the announcement of the upcoming security release\n>>> ( http://www.postgresql.org/about/news/1454/ ) does not mention the\n>>> new release number, methinks there is plenty of room for confusion :(\n>>>\n>>> It might be an idea to update the \"splash box\" with details of the\n>>> upcoming\n>>> release.\n>>\n>>\n>> I agree updating the \"spash box\" would make sense.\n>>\n>\n> Or perhaps include a date on said splashes, so we know when to panic :-)\n\nI've added the date to the splash. You can cease panicing now :-)\n\n--\nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 2 Apr 2013 04:34:39 -0400",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres upgrade, security release, where?"
},
{
"msg_contents": "On 02/04/13 21:34, Dave Page wrote:\n> On Mon, Apr 1, 2013 at 11:43 PM, Mark Kirkwood\n> <[email protected]> wrote:\n>> On 02/04/13 13:55, Bruce Momjian wrote:\n>>>\n>>> On Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:\n>>>>>\n>>>>> Due to the security nature of the release, the source and binaries will\n>>>>> only be publicly available on April 4 --- there are no pre-release\n>>>>> versions available.\n>>>>\n>>>>\n>>>> The PostgreSQL homepage has a big announcement saying\n>>>> \"PostgreSQL minor versions released!\", including a mention of a\n>>>> \"security issue\";\n>>>> unfortunately it's not obvious that this is for the prior 9.2.3 release\n>>>> and as\n>>>> the announcement of the upcoming security release\n>>>> ( http://www.postgresql.org/about/news/1454/ ) does not mention the\n>>>> new release number, methinks there is plenty of room for confusion :(\n>>>>\n>>>> It might be an idea to update the \"splash box\" with details of the\n>>>> upcoming\n>>>> release.\n>>>\n>>>\n>>> I agree updating the \"spash box\" would make sense.\n>>>\n>>\n>> Or perhaps include a date on said splashes, so we know when to panic :-)\n>\n> I've added the date to the splash. You can cease panicing now :-)\n>\n\n...wipes forehead...\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 02 Apr 2013 21:47:21 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres upgrade, security release, where?"
},
{
"msg_contents": "On 02/04/13 21:47, Mark Kirkwood wrote:\n> On 02/04/13 21:34, Dave Page wrote:\n>> On Mon, Apr 1, 2013 at 11:43 PM, Mark Kirkwood\n>> <[email protected]> wrote:\n>>> On 02/04/13 13:55, Bruce Momjian wrote:\n>>>>\n>>>> On Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:\n>>>>>>\n>>>>>> Due to the security nature of the release, the source and binaries\n>>>>>> will\n>>>>>> only be publicly available on April 4 --- there are no pre-release\n>>>>>> versions available.\n>>>>>\n>>>>>\n>>>>> The PostgreSQL homepage has a big announcement saying\n>>>>> \"PostgreSQL minor versions released!\", including a mention of a\n>>>>> \"security issue\";\n>>>>> unfortunately it's not obvious that this is for the prior 9.2.3\n>>>>> release\n>>>>> and as\n>>>>> the announcement of the upcoming security release\n>>>>> ( http://www.postgresql.org/about/news/1454/ ) does not mention the\n>>>>> new release number, methinks there is plenty of room for confusion :(\n>>>>>\n>>>>> It might be an idea to update the \"splash box\" with details of the\n>>>>> upcoming\n>>>>> release.\n>>>>\n>>>>\n>>>> I agree updating the \"spash box\" would make sense.\n>>>>\n>>>\n>>> Or perhaps include a date on said splashes, so we know when to panic :-)\n>>\n>> I've added the date to the splash. You can cease panicing now :-)\n>>\n>\n> ...wipes forehead...\n>\n\nNice - but at the risk of seeming ungrateful, it would be good to know \nwhat timezone said date referred to...in case people were waiting on an \nimportant announcement or something... :-)\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 04 Apr 2013 21:52:04 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres upgrade, security release, where?"
},
{
"msg_contents": "2013/4/4 Mark Kirkwood <[email protected]>:\n> On 02/04/13 21:47, Mark Kirkwood wrote:\n>>\n>> On 02/04/13 21:34, Dave Page wrote:\n>>>\n>>> On Mon, Apr 1, 2013 at 11:43 PM, Mark Kirkwood\n>>> <[email protected]> wrote:\n>>>>\n>>>> On 02/04/13 13:55, Bruce Momjian wrote:\n>>>>>\n>>>>>\n>>>>> On Tue, Apr 2, 2013 at 09:40:07AM +0900, Ian Lawrence Barwick wrote:\n>>>>>>>\n>>>>>>>\n>>>>>>> Due to the security nature of the release, the source and binaries\n>>>>>>> will\n>>>>>>> only be publicly available on April 4 --- there are no pre-release\n>>>>>>> versions available.\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> The PostgreSQL homepage has a big announcement saying\n>>>>>> \"PostgreSQL minor versions released!\", including a mention of a\n>>>>>> \"security issue\";\n>>>>>> unfortunately it's not obvious that this is for the prior 9.2.3\n>>>>>> release\n>>>>>> and as\n>>>>>> the announcement of the upcoming security release\n>>>>>> ( http://www.postgresql.org/about/news/1454/ ) does not mention the\n>>>>>> new release number, methinks there is plenty of room for confusion :(\n>>>>>>\n>>>>>> It might be an idea to update the \"splash box\" with details of the\n>>>>>> upcoming\n>>>>>> release.\n>>>>>\n>>>>>\n>>>>>\n>>>>> I agree updating the \"spash box\" would make sense.\n>>>>>\n>>>>\n>>>> Or perhaps include a date on said splashes, so we know when to panic :-)\n>>>\n>>>\n>>> I've added the date to the splash. You can cease panicing now :-)\n>>>\n>>\n>> ...wipes forehead...\n>>\n>\n> Nice - but at the risk of seeming ungrateful, it would be good to know what\n> timezone said date referred to...in case people were waiting on an important\n> announcement or something... :-)\n\nI'm guessing somewhere around the start of the business day US time on their\neast coast? Which means a late night for those of us on the early side of\nthe International Date Line (I'm in Japan). I'll want to at least find out what\nthe nature of the problem is before deciding whether I need to burn some\nlate-nite oil...\n\nRegards\n\nIan Barwick\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 4 Apr 2013 18:11:34 +0900",
"msg_from": "Ian Lawrence Barwick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres upgrade, security release, where?"
}
] |
[
{
"msg_contents": "Hi everybody,\n\nin a project I have a performance problem, which I (and my colleagues) don't understand. It's a simple join between 2 of 3 tables:\n\ntable-1: user (id, user_name, ...). This table has about 1 million rows (999673 rows)\ntable-2: competition (57 rows)\ntable-3: user_2_competition. A relation between user and competition. This table has about 100.000 rows\n\nThe query is a join between table user_2_competition and user and looks like this:\n\nselect u.id, u.user_name\nfrom user_2_competition uc \n left join \"user\" u on u.id = uc.user_id \nwhere uc.competition_id = '3cc1cb9b3ac132ad013ad01316040001'\n\nThe query returns the ID and user_name of all users participating in a competition.\n\nWhat I don't understand: This query executes a sequential scan on user!\n\n\nThe tables have the following indexes:\n\nuser_2_competition: there is an index on user_id and an index on competition_id (competition_id is a VARCHAR(32) containing UUIDs)\nuser: id is the primary key and has therefore a unique index (the ID is a VARCHAR(32), which contains UUIDs).\n\nThe database has just been restored from a backup, I've executed ANALYZE for both tables.\n\nThe output of explain analyze (Postgres 9.2.3):\n\nHash Left Join (cost=111357.64..126222.29 rows=41396 width=42) (actual time=1982.543..2737.331 rows=41333 loops=1)\n Hash Cond: ((uc.user_id)::text = (u.id)::text)\n -> Seq Scan on user_2_competition uc (cost=0.00..4705.21 rows=41396 width=33) (actual time=0.019..89.691 rows=41333 loops=1)\n Filter: ((competition_id)::text = '3cc1cb9b3ac132ad013ad01316040001'::text)\n Rows Removed by Filter: 80684\n -> Hash (cost=90074.73..90074.73 rows=999673 width=42) (actual time=1977.604..1977.604 rows=999673 loops=1)\n Buckets: 2048 Batches: 128 Memory Usage: 589kB\n -> Seq Scan on \"user\" u (cost=0.00..90074.73 rows=999673 width=42) (actual time=0.004..1178.827 rows=999673 loops=1)\nTotal runtime: 2740.723 ms\n\n\nI expected to see an index-scan on user_2_competition with a hash join to user, not a sequential scan on user. I've tried this with Postgres 9.1 and 9.2.3).\n\nAny ideas, what's going on here?\n\nWith EXPLAIN ANALYZE I can see, which query plan Postgres is using. Is there any way to find out, WHY postgres uses this query plan? 
\n\nbest regards\nDieter\n\n\n----------------------------------------------------\n\nThe full table schema:\n\nCREATE TABLE user_2_competition\n(\nid varchar(32) NOT NULL,\nversion int4 NOT NULL DEFAULT 0,\nconditions_confirm_ip varchar(30),\ncreated_date timestamp NOT NULL DEFAULT now(),\ndeleted bool NOT NULL DEFAULT false,\nlast_visit timestamp,\nresort_id int4,\nrole varchar(255),\ncaid int4 NOT NULL,\nponr int4 NOT NULL,\nktka int4 NOT NULL,\nlfnr int4 NOT NULL,\ntotal_visits int8 NOT NULL DEFAULT 0,\nverified bool NOT NULL,\ncompetition_id varchar(32),\nuser_id varchar(32),\ncompetition_terms int4 NOT NULL DEFAULT (-1),\ndisqualified bool NOT NULL DEFAULT false,\nregistration_key_id int4,\n\nPRIMARY KEY(id)\n);\n\n-- Indexes ------------------------------------------------------------\nCREATE INDEX IDX_USER_ID ON user_2_competition USING btree (user_id);\nCREATE INDEX idx_user_2_competition_competition ON user_2_competition USING btree (competition_id);\nCREATE UNIQUE INDEX user_2_competition_user_id_competition_id_key ON user_2_competition USING btree (user_id, competition_id);\n\n-- Foreign key constraints -------------------------------------------\nALTER TABLE user_2_competition\n ADD CONSTRAINT fk_user_competition_competition_group\n FOREIGN KEY (competition_id) REFERENCES competition (id) ON DELETE CASCADE;\nALTER TABLE user_2_competition\n ADD CONSTRAINT fk_user_2_competition_registration_key\n FOREIGN KEY (registration_key_id) REFERENCES competition_registration_key (id);\nALTER TABLE user_2_competition\n ADD CONSTRAINT fk_user_competition_terms\n FOREIGN KEY (competition_terms) REFERENCES competition_terms (id);\nALTER TABLE user_2_competition\n ADD CONSTRAINT fk_user_competition_user\n FOREIGN KEY (user_id) REFERENCES user (id) ON DELETE CASCADE;\n\n-----------------\n\n\nCREATE TABLE competition\n(\nid varchar(32) NOT NULL,\nversion int4 NOT NULL DEFAULT 0,\ncreated_by varchar(255),\ncreated_date timestamp,\nmodified_by varchar(255),\nmodified_date timestamp,\ndeleted bool NOT NULL DEFAULT false,\nactive bool NOT NULL DEFAULT false,\naverage_score float8,\nstart_time timestamp NOT NULL,\nend_time timestamp NOT NULL,\ninfo_layout varchar(200),\nlist_layout varchar(200),\nlead_action varchar(100),\nranking_layout varchar(200),\nexternal_url varchar(255),\nforum_enabled bool NOT NULL DEFAULT false,\nhas_ski_movies bool NOT NULL DEFAULT false,\nlink_name varchar(50) NOT NULL,\nparticipation_type varchar(255) NOT NULL,\nsponsor varchar(100),\ncustom_style bool NOT NULL DEFAULT true,\nbg_color varchar(7),\ntab_style varchar(20),\nbackground_image_preview_upload_date timestamp,\nbackground_image_upload_date timestamp,\nsponsor_logo_upload_date timestamp,\nname int4 NOT NULL,\nshort_name int4 NOT NULL,\ndescription int4 NOT NULL,\nteaser int4 NOT NULL,\ntags varchar(1000),\nlogo_resort_id int4,\nvisible bool NOT NULL DEFAULT true,\ntime_zone_id varchar(32) NOT NULL DEFAULT 'Europe/Vienna'::character varying,\ncss_styles varchar(2000),\nteaser_popup int4 NOT NULL DEFAULT (-1),\nwinner_tab int4 NOT NULL DEFAULT (-1),\nreminder_email int4 NOT NULL DEFAULT (-1),\nreminder_email_subject int4 NOT NULL DEFAULT (-1),\npriority int4 NOT NULL DEFAULT 5,\ninstance_selector_class_name varchar(200),\nexternal_sponsor_logo_upload_date timestamp,\ncustomer_id varchar(10),\nrestricted_registration bool NOT NULL DEFAULT false,\n\nPRIMARY KEY(id)\n);\n\n-- Indexes ------------------------------------------------------------\nCREATE UNIQUE INDEX idx_competition_link_name ON competition USING btree 
(link_name);\n\n-- Foreign key constraints -------------------------------------------\nALTER TABLE competition\n ADD CONSTRAINT fk_competition_description\n FOREIGN KEY (description) REFERENCES localized_text (id);\nALTER TABLE competition\n ADD CONSTRAINT fk_competition_name\n FOREIGN KEY (name) REFERENCES localized_text (id);\nALTER TABLE competition\n ADD CONSTRAINT fk_competition_reminder_email\n FOREIGN KEY (reminder_email) REFERENCES localized_text (id);\nALTER TABLE competition\n ADD CONSTRAINT fk_competition_reminder_subject\n FOREIGN KEY (reminder_email_subject) REFERENCES localized_text (id);\nALTER TABLE competition\n ADD CONSTRAINT fk_competition_short_name\n FOREIGN KEY (short_name) REFERENCES localized_text (id);\nALTER TABLE competition\n ADD CONSTRAINT fk_competition_teaser\n FOREIGN KEY (teaser) REFERENCES localized_text (id);\nALTER TABLE competition\n ADD CONSTRAINT fk_competition_teaser_popup\n FOREIGN KEY (teaser_popup) REFERENCES localized_text (id);\nALTER TABLE competition\n ADD CONSTRAINT fk_competition_winner_tab\n FOREIGN KEY (winner_tab) REFERENCES localized_text (id);\n\n\n\nCREATE TABLE user\n(\nid varchar(32) NOT NULL,\nversion int4 NOT NULL DEFAULT 0,\ndeleted bool NOT NULL DEFAULT false,\nabout_me varchar(8000),\nbirth_date date,\ncommunicated_to_ticket_corner timestamp,\nconditions_confirm_date timestamp,\nemail varchar(125) NOT NULL,\nfname varchar(50) NOT NULL,\ngender varchar(10),\nlname varchar(50) NOT NULL,\nold_skiline_id int4,\nphoto_upload_date timestamp,\nnews_letter bool NOT NULL DEFAULT true,\nnewsfeed_notification varchar(20),\npreferred_language varchar(16),\nprivacy_address varchar(10) NOT NULL DEFAULT 'FRIENDS'::character varying,\nprivacy_basic_data varchar(10) NOT NULL DEFAULT 'FRIENDS'::character varying,\nprivacy_community_accounts varchar(10) NOT NULL DEFAULT 'FRIENDS'::character varying,\nprivacy_email varchar(10) NOT NULL DEFAULT 'FRIENDS'::character varying,\nprivacy_fitness_profile varchar(10) NOT NULL DEFAULT 'FRIENDS'::character varying,\nprivacy_phone_numbers varchar(10) NOT NULL DEFAULT 'FRIENDS'::character varying,\nprivacy_race_profile varchar(10) NOT NULL DEFAULT 'FRIENDS'::character varying,\nprivacy_rankings_user_name varchar(10) DEFAULT 'NO_DISPLAY'::character varying,\nsearch_email varchar(125),\nsearch_name varchar(110),\nstatus_points int4 NOT NULL DEFAULT 0,\nticket_corner_id int4,\nuser_name varchar(50) NOT NULL,\nuser_name_deleted varchar(50),\naddress varchar(32) NOT NULL,\ncurrent_fitness_profile varchar(32),\nrace_profile varchar(32) NOT NULL,\ncustom1 varchar(255),\ncustom2 varchar(255),\ncustom3 varchar(255),\nmagento_customer_id int4,\ncreated_by varchar(255),\ncreated_date timestamp,\nmodified_by varchar(255),\nmodified_date timestamp,\nnewsfeed varchar(32),\nbirth_day int4,\nestimated_gender varchar(10),\ncurrent_season_statistics int4 NOT NULL DEFAULT (-1),\nstatistic_competition_count int4 NOT NULL DEFAULT 0,\nstatistic_friend_count int4 NOT NULL DEFAULT 0,\nstatistic_group_count int4 NOT NULL DEFAULT 0,\nstatistic_skimovie_count_friends int4 NOT NULL DEFAULT 0,\nstatistic_skimovie_count_public int4 NOT NULL DEFAULT 0,\nstatistic_skimovie_count_all int4 NOT NULL DEFAULT 0,\nstatistic_photo_count_public int4 NOT NULL DEFAULT 0,\nstatistic_photo_count_friends int4 NOT NULL DEFAULT 0,\nstatistic_photo_count_all int4 NOT NULL DEFAULT 0,\nprivacy_calendar varchar(10) DEFAULT 'FRIENDS'::character varying,\nsecurity_info_id varchar(32),\nstatistic_skiing_days int4 NOT NULL DEFAULT 0,\nstatistic_vertical_meters 
int4 NOT NULL DEFAULT 0,\nconditions_confirm_ip varchar(30),\ndoi_click_ip varchar(30),\nstaff bool,\norigin varchar(32),\ndisqualified bool,\nstatistic_badge_count int4 NOT NULL DEFAULT 0,\ntime_zone_id varchar(32),\nold_email varchar(125),\nhandicap float4,\nprevious_handicap float4,\nhandicap_calculation_time timestamp,\nlast_skiing_day date,\nadmin_disqualification bool,\nadmin_disqualification_top100 bool,\n\nPRIMARY KEY(id)\n);\n\n-- Indexes ------------------------------------------------------------\nCREATE INDEX idx_user_birthdate ON user USING btree (birth_day);\nCREATE INDEX idx_user_created_date ON user USING btree (created_date);\nCREATE INDEX idx_user_email ON user USING btree (email);\nCREATE INDEX idx_user_magento_customer_id ON user USING btree (magento_customer_id);\nCREATE INDEX idx_usr_modified_date ON user USING btree (modified_date);\nCREATE UNIQUE INDEX user_address_key ON user USING btree (address);\nCREATE UNIQUE INDEX user_race_profile_key ON user USING btree (race_profile);\nCREATE UNIQUE INDEX user_ticket_corner_id_key ON user USING btree (ticket_corner_id);\nCREATE UNIQUE INDEX user_user_name_key ON user USING btree (user_name);\n\n-- Foreign key constraints -------------------------------------------\nALTER TABLE user\n ADD CONSTRAINT fk_user_adress\n FOREIGN KEY (address) REFERENCES address (id) ON DELETE CASCADE;\nALTER TABLE user\n ADD CONSTRAINT fk36ebcbd93f2254\n FOREIGN KEY (current_fitness_profile) REFERENCES fitness_profile (id);\nALTER TABLE user\n ADD CONSTRAINT fk_user_newsfeed\n FOREIGN KEY (newsfeed) REFERENCES newsfeed (id);\nALTER TABLE user\n ADD CONSTRAINT fk36ebcbd70f10c\n FOREIGN KEY (race_profile) REFERENCES race_profile (id);\nALTER TABLE user\n ADD CONSTRAINT fk_user_sec_info\n FOREIGN KEY (security_info_id) REFERENCES security_info (id) ON DELETE CASCADE;\nALTER TABLE user\n ADD CONSTRAINT fk_user_statistics_current_season\n FOREIGN KEY (current_season_statistics) REFERENCES user_season_statistics (id);",
"msg_date": "Tue, 2 Apr 2013 10:52:01 +0200",
"msg_from": "Dieter Rehbein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Join between 2 tables always executes a sequential scan on the larger\n\ttable"
},
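Dieter's closing question, how to see why the planner chose this plan, can be probed directly. A minimal sketch, assuming only the tables and query from the message above; pg_stats and EXPLAIN (ANALYZE, BUFFERS) are standard PostgreSQL features, nothing here is taken from the thread itself:

    -- How selective does the planner believe the filter on competition_id is?
    -- For an equality match on a value in the most-common-values list, the row
    -- estimate comes straight from most_common_freqs.
    SELECT n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'user_2_competition'
      AND attname = 'competition_id';

    -- Re-run the query with buffer accounting to see the real I/O per plan node.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT u.id, u.user_name
    FROM user_2_competition uc
      LEFT JOIN "user" u ON u.id = uc.user_id
    WHERE uc.competition_id = '3cc1cb9b3ac132ad013ad01316040001';

The estimate of 41396 rows against an actual 41333 suggests the statistics are already accurate; the plan choice is a costing decision, not an estimation problem.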
{
"msg_contents": "From: Dieter Rehbein [mailto:[email protected]] \nSent: Tuesday, April 02, 2013 4:52 AM\nTo: [email protected]\nSubject: Join between 2 tables always executes a sequential scan on the larger table\n\nHi everybody,\n\nin a project I have a performance problem, which I (and my colleagues) don't understand. It's a simple join between 2 of 3 tables:\n\ntable-1: user (id, user_name, ...). This table has about 1 million rows (999673 rows)\ntable-2: competition (57 rows)\ntable-3: user_2_competition. A relation between user and competition. This table has about 100.000 rows\n\nThe query is a join between table user_2_competition and user and looks like this:\n\nselect u.id, u.user_name\nfrom user_2_competition uc \n left join \"user\" u on u.id = uc.user_id \nwhere uc.competition_id = '3cc1cb9b3ac132ad013ad01316040001'\n\nThe query returns the ID and user_name of all users participating in a competition.\n\nWhat I don't understand: This query executes a sequential scan on user!\n\n\nThe tables have the following indexes:\n\nuser_2_competition: there is an index on user_id and an index on competition_id (competition_id is a VARCHAR(32) containing UUIDs)\nuser: id is the primary key and has therefore a unique index (the ID is a VARCHAR(32), which contains UUIDs).\n\nThe database has just been restored from a backup, I've executed ANALYZE for both tables.\n\nThe output of explain analyze (Postgres 9.2.3):\n\nHash Left Join (cost=111357.64..126222.29 rows=41396 width=42) (actual time=1982.543..2737.331 rows=41333 loops=1)\n Hash Cond: ((uc.user_id)::text = (u.id)::text)\n -> Seq Scan on user_2_competition uc (cost=0.00..4705.21 rows=41396 width=33) (actual time=0.019..89.691 rows=41333 loops=1)\n Filter: ((competition_id)::text = '3cc1cb9b3ac132ad013ad01316040001'::text)\n Rows Removed by Filter: 80684\n -> Hash (cost=90074.73..90074.73 rows=999673 width=42) (actual time=1977.604..1977.604 rows=999673 loops=1)\n Buckets: 2048 Batches: 128 Memory Usage: 589kB\n -> Seq Scan on \"user\" u (cost=0.00..90074.73 rows=999673 width=42) (actual time=0.004..1178.827 rows=999673 loops=1)\nTotal runtime: 2740.723 ms\n\n\nI expected to see an index-scan on user_2_competition with a hash join to user, not a sequential scan on user. I've tried this with Postgres 9.1 and 9.2.3).\n\nAny ideas, what's going on here?\n\nWith EXPLAIN ANALYZE I can see, which query plan Postgres is using. Is there any way to find out, WHY postgres uses this query plan? \n\nbest regards\nDieter\n\n-----------------------------------------------\n\nDieter, \nwhy do you think index-scan on user_2_competition would be better?\n\nBased on huge number of rows returned (41333 out of total ~120000 in the table) from this table optimizer decided that Seq Scan is better than index scan.\nYou don't show QUERY TUNING parameters from Postgresql.conf, are they default?\nPlaying with optimizer parameters (lowering random_page_cost, lowering cpu_index_tuple_cost , increasing effective_cache_size, or just setting enable_seqscan = off), you could try to force \"optimizer\" to use index, and see if you are getting better results.\n\nRegards,\nIgor Neyman\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 2 Apr 2013 14:55:55 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join between 2 tables always executes a sequential scan on the\n\tlarger table"
},
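Igor's suggestions can be tried per session before touching postgresql.conf. A minimal sketch under that assumption; the query is the one from the original message, and the concrete values for random_page_cost and effective_cache_size are placeholders for experimentation, not recommendations:

    -- Force the planner away from sequential scans, purely to compare costs and timings.
    SET enable_seqscan = off;
    EXPLAIN ANALYZE
    SELECT u.id, u.user_name
    FROM user_2_competition uc
      LEFT JOIN "user" u ON u.id = uc.user_id
    WHERE uc.competition_id = '3cc1cb9b3ac132ad013ad01316040001';
    RESET enable_seqscan;

    -- Make random I/O look cheaper and the cache look larger (example values only).
    SET random_page_cost = 2.0;
    SET effective_cache_size = '4GB';
    -- ... repeat the same EXPLAIN ANALYZE here and compare plans and timings ...
    RESET random_page_cost;
    RESET effective_cache_size;

If the forced index plan turns out slower than the original, the sequential scan was the right choice and the defaults should stay.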
{
"msg_contents": "Igor Neyman <[email protected]> writes:\n> The output of explain analyze (Postgres 9.2.3):\n\n> Hash Left Join (cost=111357.64..126222.29 rows=41396 width=42) (actual time=1982.543..2737.331 rows=41333 loops=1)\n> Hash Cond: ((uc.user_id)::text = (u.id)::text)\n> -> Seq Scan on user_2_competition uc (cost=0.00..4705.21 rows=41396 width=33) (actual time=0.019..89.691 rows=41333 loops=1)\n> Filter: ((competition_id)::text = '3cc1cb9b3ac132ad013ad01316040001'::text)\n> Rows Removed by Filter: 80684\n> -> Hash (cost=90074.73..90074.73 rows=999673 width=42) (actual time=1977.604..1977.604 rows=999673 loops=1)\n> Buckets: 2048 Batches: 128 Memory Usage: 589kB\n> -> Seq Scan on \"user\" u (cost=0.00..90074.73 rows=999673 width=42) (actual time=0.004..1178.827 rows=999673 loops=1)\n> Total runtime: 2740.723 ms\n\n\n> I expected to see an index-scan on user_2_competition with a hash join to user, not a sequential scan on user. I've tried this with Postgres 9.1 and 9.2.3).\n\nAccording to the numbers, you're fetching about a third of the\nuser_2_competition table, which is well past the point where an\nindexscan is of any use. It's picking the seqscan because it thinks\nthat's faster, and I'm sure it's right.\n\nThe aspect of this plan that actually seems a bit dubious is that it's\nhashing the larger input table rather than the smaller one. There's\na thread going on about that in -hackers right now; we think it's\nprobably putting too much emphasis on the distribution of the join key\nas opposed to the size of the table.\n\nOne thing that would help is increasing work_mem --- it looks like you\nare using the default 1MB. Cranking that up to a few MB would reduce\nthe number of hash batches needed.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 02 Apr 2013 11:45:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Join between 2 tables always executes a sequential scan on\n\tthe larger table"
},
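Tom's hash-batch point can be quantified from the plan itself: 128 batches at roughly 589kB each puts the whole hash table somewhere around 75MB, so the default work_mem of 1MB forces heavy batching. Even the few MB Tom suggests cuts the batch count sharply; the 96MB below is an assumed value sized just above that estimate to reach a single batch, set only for the current session:

    SET work_mem = '96MB';   -- assumed value, large enough for the ~75MB hash in one batch
    EXPLAIN ANALYZE
    SELECT u.id, u.user_name
    FROM user_2_competition uc
      LEFT JOIN "user" u ON u.id = uc.user_id
    WHERE uc.competition_id = '3cc1cb9b3ac132ad013ad01316040001';
    RESET work_mem;

With fewer batches the build side of the hash join is spilled to disk and re-read far less often.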
{
"msg_contents": "Hi Igor,\n\nthanks for the reply. The sequential scan on user_2_competition wasn't my main-problem. What really suprised me was the sequential scan on table user, which is a sequential scan over one million rows.\n\nHash Left Join (cost=111357.64..126222.29 rows=41396 width=42) (actual time=1982.543..2737.331 rows=41333 loops=1)\n Hash Cond: ((uc.user_id)::text = (u.id)::text)\n -> Seq Scan on user_2_competition uc (cost=0.00..4705.21 rows=41396 width=33) (actual time=0.019..89.691 rows=41333 loops=1)\n Filter: ((competition_id)::text = '3cc1cb9b3ac132ad013ad01316040001'::text)\n Rows Removed by Filter: 80684\n -> Hash (cost=90074.73..90074.73 rows=999673 width=42) (actual time=1977.604..1977.604 rows=999673 loops=1)\n Buckets: 2048 Batches: 128 Memory Usage: 589kB\n -> Seq Scan on \"user\" u (cost=0.00..90074.73 rows=999673 width=42) (actual time=0.004..1178.827 rows=999673 loops=1) <-- This sequential scan is strange.\n\n\nIMHO the reason for the sequential scan on user is, that it is faster than an index-scan for 41333 rows. I've tried the same query using a different competition id with much less participants (about 1700). That query has a query plan as expected:\n\nNested Loop Left Join (cost=0.00..21385.59 rows=1684 width=42) (actual time=1.317..147.781 rows=1757 loops=1)\n -> Seq Scan on user_2_competition uc (cost=0.00..7026.93 rows=1684 width=33) (actual time=1.262..92.339 rows=1757 loops=1)\n Filter: ((competition_id)::text = '3cc1cb963b988f12013bc737b4590001'::text)\n -> Index Scan using user_pkey on \"user\" u (cost=0.00..8.51 rows=1 width=42) (actual time=0.030..0.031 rows=1 loops=1757)\n Index Cond: ((id)::text = (uc.user_id)::text)\nTotal runtime: 148.068 ms\n\n\nregards\nDieter\n\n\n\nAm 02.04.2013 um 16:55 schrieb Igor Neyman <[email protected]>:\n\nFrom: Dieter Rehbein [mailto:[email protected]] \nSent: Tuesday, April 02, 2013 4:52 AM\nTo: [email protected]\nSubject: Join between 2 tables always executes a sequential scan on the larger table\n\nHi everybody,\n\nin a project I have a performance problem, which I (and my colleagues) don't understand. It's a simple join between 2 of 3 tables:\n\ntable-1: user (id, user_name, ...). This table has about 1 million rows (999673 rows)\ntable-2: competition (57 rows)\ntable-3: user_2_competition. A relation between user and competition. 
This table has about 100.000 rows\n\nThe query is a join between table user_2_competition and user and looks like this:\n\nselect u.id, u.user_name\nfrom user_2_competition uc \n left join \"user\" u on u.id = uc.user_id \nwhere uc.competition_id = '3cc1cb9b3ac132ad013ad01316040001'\n\nThe query returns the ID and user_name of all users participating in a competition.\n\nWhat I don't understand: This query executes a sequential scan on user!\n\n\nThe tables have the following indexes:\n\nuser_2_competition: there is an index on user_id and an index on competition_id (competition_id is a VARCHAR(32) containing UUIDs)\nuser: id is the primary key and has therefore a unique index (the ID is a VARCHAR(32), which contains UUIDs).\n\nThe database has just been restored from a backup, I've executed ANALYZE for both tables.\n\nThe output of explain analyze (Postgres 9.2.3):\n\nHash Left Join (cost=111357.64..126222.29 rows=41396 width=42) (actual time=1982.543..2737.331 rows=41333 loops=1)\n Hash Cond: ((uc.user_id)::text = (u.id)::text)\n -> Seq Scan on user_2_competition uc (cost=0.00..4705.21 rows=41396 width=33) (actual time=0.019..89.691 rows=41333 loops=1)\n Filter: ((competition_id)::text = '3cc1cb9b3ac132ad013ad01316040001'::text)\n Rows Removed by Filter: 80684\n -> Hash (cost=90074.73..90074.73 rows=999673 width=42) (actual time=1977.604..1977.604 rows=999673 loops=1)\n Buckets: 2048 Batches: 128 Memory Usage: 589kB\n -> Seq Scan on \"user\" u (cost=0.00..90074.73 rows=999673 width=42) (actual time=0.004..1178.827 rows=999673 loops=1)\nTotal runtime: 2740.723 ms\n\n\nI expected to see an index-scan on user_2_competition with a hash join to user, not a sequential scan on user. I've tried this with Postgres 9.1 and 9.2.3).\n\nAny ideas, what's going on here?\n\nWith EXPLAIN ANALYZE I can see, which query plan Postgres is using. Is there any way to find out, WHY postgres uses this query plan? \n\nbest regards\nDieter\n\n-----------------------------------------------\n\nDieter, \nwhy do you think index-scan on user_2_competition would be better?\n\nBased on huge number of rows returned (41333 out of total ~120000 in the table) from this table optimizer decided that Seq Scan is better than index scan.\nYou don't show QUERY TUNING parameters from Postgresql.conf, are they default?\nPlaying with optimizer parameters (lowering random_page_cost, lowering cpu_index_tuple_cost , increasing effective_cache_size, or just setting enable_seqscan = off), you could try to force \"optimizer\" to use index, and see if you are getting better results.\n\nRegards,\nIgor Neyman\n\nHappy Skiing!\n\nDieter Rehbein\nSoftware Architect | [email protected] \n\nSkiline Media GmbH\nLakeside B03\n9020 Klagenfurt, Austria\n\nfon: +43 463 249445-800\nfax: +43 463 249445-102\n\n\"Erlebe Skifahren neu!\"\n\nCONFIDENTIALITY: This e-mail and any attachments are confidential and may also be privileged. If you are not the designated recipient, please notify the sender immediately by reply e-mail and destroy all copies (digital and paper). Any unauthorized disclosure, distribution, copying, storage or use of the information contained in this e-mail or any attachments is strictly prohibited and may be unlawful.\nLEGAL: Skiline Media GmbH - Managing Director: Michael Saringer\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 3 Apr 2013 10:18:55 +0200",
"msg_from": "Dieter Rehbein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Join between 2 tables always executes a sequential scan on the\n\tlarger table"
},
{
"msg_contents": "HiTom,\n\nthanks for your reply. It was the sequential scan on table user (about 1 million rows), which really surprised me. But a sequential scan over 1 million users seems to be more efficient than an index-Scan for 41.000 rows.\n\nIf a execute the query with the ID of a competiton with less participants, the query has a plan as expected:\n\nNested Loop Left Join (cost=0.00..21385.72 rows=1684 width=42) (actual time=1.300..138.223 rows=1757 loops=1)\n -> Seq Scan on user_2_competition uc (cost=0.00..7026.93 rows=1684 width=33) (actual time=1.253..90.846 rows=1757 loops=1)\n Filter: ((competition_id)::text = '3cc1cb963b988f12013bc737b4590001'::text)\n -> Index Scan using user_pkey on \"user\" u (cost=0.00..8.51 rows=1 width=42) (actual time=0.026..0.026 rows=1 loops=1757)\n Index Cond: ((id)::text = (uc.user_id)::text)\nTotal runtime: 138.505 ms\n\n\nregards\nDieter\n\n\n\nAm 02.04.2013 um 17:45 schrieb Tom Lane <[email protected]>:\n\nIgor Neyman <[email protected]> writes:\n> The output of explain analyze (Postgres 9.2.3):\n\n> Hash Left Join (cost=111357.64..126222.29 rows=41396 width=42) (actual time=1982.543..2737.331 rows=41333 loops=1)\n> Hash Cond: ((uc.user_id)::text = (u.id)::text)\n> -> Seq Scan on user_2_competition uc (cost=0.00..4705.21 rows=41396 width=33) (actual time=0.019..89.691 rows=41333 loops=1)\n> Filter: ((competition_id)::text = '3cc1cb9b3ac132ad013ad01316040001'::text)\n> Rows Removed by Filter: 80684\n> -> Hash (cost=90074.73..90074.73 rows=999673 width=42) (actual time=1977.604..1977.604 rows=999673 loops=1)\n> Buckets: 2048 Batches: 128 Memory Usage: 589kB\n> -> Seq Scan on \"user\" u (cost=0.00..90074.73 rows=999673 width=42) (actual time=0.004..1178.827 rows=999673 loops=1)\n> Total runtime: 2740.723 ms\n\n\n> I expected to see an index-scan on user_2_competition with a hash join to user, not a sequential scan on user. I've tried this with Postgres 9.1 and 9.2.3).\n\nAccording to the numbers, you're fetching about a third of the\nuser_2_competition table, which is well past the point where an\nindexscan is of any use. It's picking the seqscan because it thinks\nthat's faster, and I'm sure it's right.\n\nThe aspect of this plan that actually seems a bit dubious is that it's\nhashing the larger input table rather than the smaller one. There's\na thread going on about that in -hackers right now; we think it's\nprobably putting too much emphasis on the distribution of the join key\nas opposed to the size of the table.\n\nOne thing that would help is increasing work_mem --- it looks like you\nare using the default 1MB. Cranking that up to a few MB would reduce\nthe number of hash batches needed.\n\n\t\t\tregards, tom lane\n\nHappy Skiing!\n\nDieter Rehbein\nSoftware Architect | [email protected] \n\nSkiline Media GmbH\nLakeside B03\n9020 Klagenfurt, Austria\n\nfon: +43 463 249445-800\nfax: +43 463 249445-102\n\n\"Erlebe Skifahren neu!\"\n\nCONFIDENTIALITY: This e-mail and any attachments are confidential and may also be privileged. If you are not the designated recipient, please notify the sender immediately by reply e-mail and destroy all copies (digital and paper). 
Any unauthorized disclosure, distribution, copying, storage or use of the information contained in this e-mail or any attachments is strictly prohibited and may be unlawful.\nLEGAL: Skiline Media GmbH - Managing Director: Michael Saringer\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 3 Apr 2013 10:24:54 +0200",
"msg_from": "Dieter Rehbein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Join between 2 tables always executes a sequential scan on the\n\tlarger table"
},
{
"msg_contents": "Hello Dieter,\nIf you are asking more than about 20% of the rows the optimizer will choose\nto do a seq scan and actually that's the right thing to do. On the second\nexample of yours the rows here less and that's why it chose to go with the\nindex.\nyou can force an index scan by changing the optimizer parameters as other\nguys already mentioned\n\n\nVasilis Ventirozos\n\nOn Wed, Apr 3, 2013 at 11:18 AM, Dieter Rehbein\n<[email protected]>wrote:\n\n> Hi Igor,\n>\n> thanks for the reply. The sequential scan on user_2_competition wasn't my\n> main-problem. What really suprised me was the sequential scan on table\n> user, which is a sequential scan over one million rows.\n>\n> Hash Left Join (cost=111357.64..126222.29 rows=41396 width=42) (actual\n> time=1982.543..2737.331 rows=41333 loops=1)\n> Hash Cond: ((uc.user_id)::text = (u.id)::text)\n> -> Seq Scan on user_2_competition uc (cost=0.00..4705.21 rows=41396\n> width=33) (actual time=0.019..89.691 rows=41333 loops=1)\n> Filter: ((competition_id)::text =\n> '3cc1cb9b3ac132ad013ad01316040001'::text)\n> Rows Removed by Filter: 80684\n> -> Hash (cost=90074.73..90074.73 rows=999673 width=42) (actual\n> time=1977.604..1977.604 rows=999673 loops=1)\n> Buckets: 2048 Batches: 128 Memory Usage: 589kB\n> -> Seq Scan on \"user\" u (cost=0.00..90074.73 rows=999673\n> width=42) (actual time=0.004..1178.827 rows=999673 loops=1) <-- This\n> sequential scan is strange.\n>\n>\n> IMHO the reason for the sequential scan on user is, that it is faster than\n> an index-scan for 41333 rows. I've tried the same query using a different\n> competition id with much less participants (about 1700). That query has a\n> query plan as expected:\n>\n> Nested Loop Left Join (cost=0.00..21385.59 rows=1684 width=42) (actual\n> time=1.317..147.781 rows=1757 loops=1)\n> -> Seq Scan on user_2_competition uc (cost=0.00..7026.93 rows=1684\n> width=33) (actual time=1.262..92.339 rows=1757 loops=1)\n> Filter: ((competition_id)::text =\n> '3cc1cb963b988f12013bc737b4590001'::text)\n> -> Index Scan using user_pkey on \"user\" u (cost=0.00..8.51 rows=1\n> width=42) (actual time=0.030..0.031 rows=1 loops=1757)\n> Index Cond: ((id)::text = (uc.user_id)::text)\n> Total runtime: 148.068 ms\n>\n>\n> regards\n> Dieter\n>\n>\n>\n> Am 02.04.2013 um 16:55 schrieb Igor Neyman <[email protected]>:\n>\n> From: Dieter Rehbein [mailto:[email protected]]\n> Sent: Tuesday, April 02, 2013 4:52 AM\n> To: [email protected]\n> Subject: Join between 2 tables always executes a sequential scan on the\n> larger table\n>\n> Hi everybody,\n>\n> in a project I have a performance problem, which I (and my colleagues)\n> don't understand. It's a simple join between 2 of 3 tables:\n>\n> table-1: user (id, user_name, ...). This table has about 1 million\n> rows (999673 rows)\n> table-2: competition (57 rows)\n> table-3: user_2_competition. 
A relation between user and competition.\n> This table has about 100.000 rows\n>\n> The query is a join between table user_2_competition and user and looks\n> like this:\n>\n> select u.id, u.user_name\n> from user_2_competition uc\n> left join \"user\" u on u.id = uc.user_id\n> where uc.competition_id = '3cc1cb9b3ac132ad013ad01316040001'\n>\n> The query returns the ID and user_name of all users participating in a\n> competition.\n>\n> What I don't understand: This query executes a sequential scan on user!\n>\n>\n> The tables have the following indexes:\n>\n> user_2_competition: there is an index on user_id and an index on\n> competition_id (competition_id is a VARCHAR(32) containing UUIDs)\n> user: id is the primary key and has therefore a unique index (the ID is a\n> VARCHAR(32), which contains UUIDs).\n>\n> The database has just been restored from a backup, I've executed ANALYZE\n> for both tables.\n>\n> The output of explain analyze (Postgres 9.2.3):\n>\n> Hash Left Join (cost=111357.64..126222.29 rows=41396 width=42) (actual\n> time=1982.543..2737.331 rows=41333 loops=1)\n> Hash Cond: ((uc.user_id)::text = (u.id)::text)\n> -> Seq Scan on user_2_competition uc (cost=0.00..4705.21 rows=41396\n> width=33) (actual time=0.019..89.691 rows=41333 loops=1)\n> Filter: ((competition_id)::text =\n> '3cc1cb9b3ac132ad013ad01316040001'::text)\n> Rows Removed by Filter: 80684\n> -> Hash (cost=90074.73..90074.73 rows=999673 width=42) (actual\n> time=1977.604..1977.604 rows=999673 loops=1)\n> Buckets: 2048 Batches: 128 Memory Usage: 589kB\n> -> Seq Scan on \"user\" u (cost=0.00..90074.73 rows=999673\n> width=42) (actual time=0.004..1178.827 rows=999673 loops=1)\n> Total runtime: 2740.723 ms\n>\n>\n> I expected to see an index-scan on user_2_competition with a hash join to\n> user, not a sequential scan on user. I've tried this with Postgres 9.1 and\n> 9.2.3).\n>\n> Any ideas, what's going on here?\n>\n> With EXPLAIN ANALYZE I can see, which query plan Postgres is using. Is\n> there any way to find out, WHY postgres uses this query plan?\n>\n> best regards\n> Dieter\n>\n> -----------------------------------------------\n>\n> Dieter,\n> why do you think index-scan on user_2_competition would be better?\n>\n> Based on huge number of rows returned (41333 out of total ~120000 in the\n> table) from this table optimizer decided that Seq Scan is better than index\n> scan.\n> You don't show QUERY TUNING parameters from Postgresql.conf, are they\n> default?\n> Playing with optimizer parameters (lowering random_page_cost, lowering\n> cpu_index_tuple_cost , increasing effective_cache_size, or just setting\n> enable_seqscan = off), you could try to force \"optimizer\" to use index, and\n> see if you are getting better results.\n>\n> Regards,\n> Igor Neyman\n>\n> Happy Skiing!\n>\n> Dieter Rehbein\n> Software Architect | [email protected]\n>\n> Skiline Media GmbH\n> Lakeside B03\n> 9020 Klagenfurt, Austria\n>\n> fon: +43 463 249445-800\n> fax: +43 463 249445-102\n>\n> \"Erlebe Skifahren neu!\"\n>\n> CONFIDENTIALITY: This e-mail and any attachments are confidential and may\n> also be privileged. If you are not the designated recipient, please notify\n> the sender immediately by reply e-mail and destroy all copies (digital and\n> paper). 
Any unauthorized disclosure, distribution, copying, storage or use\n> of the information contained in this e-mail or any attachments is strictly\n> prohibited and may be unlawful.\n> LEGAL: Skiline Media GmbH - Managing Director: Michael Saringer\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Wed, 3 Apr 2013 11:50:06 +0300",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join between 2 tables always executes a sequential scan\n\ton the larger table"
}
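
A concrete way to test the advice above is a session-local experiment: temporarily forbid sequential scans (or lower the random I/O cost) and compare the EXPLAIN ANALYZE output. This is only a sketch, not something proposed in the thread; enable_seqscan and random_page_cost are the standard planner GUCs, the values are placeholders, and the query is the one Dieter posted:

    BEGIN;
    SET LOCAL enable_seqscan = off;    -- planner must pick another path, for this transaction only
    SET LOCAL random_page_cost = 1.5;  -- assumption: data is mostly cached
    EXPLAIN ANALYZE
    SELECT u.id, u.user_name
    FROM user_2_competition uc
    LEFT JOIN "user" u ON u.id = uc.user_id
    WHERE uc.competition_id = '3cc1cb9b3ac132ad013ad01316040001';
    ROLLBACK;  -- SET LOCAL settings disappear with the transaction

If the plan forced this way is not actually faster than the 2.7 s hash join, the sequential scan over "user" really was the cheaper choice for ~41,000 matching rows, which is Vasilis' and Igor's point.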
] |
[
{
"msg_contents": "Why is the following query getting wrong estimation of rows?\nI am using Postgresql 9.2.1 with default_statistics_target = 100.\nI execute vacuum analyze each night.\n\n explain analyze\nSELECT\nentity.id AS \"Leads_id\", entity.type AS \"Leads_type\" ,\nleads.firstname AS \"Leads_firstname\",\nleads.lastname AS \"Leads_lastname\"\nFROM leads\nINNER JOIN entity ON leads.leadid=entity.id\n LEFT JOIN groups ON groups.groupid = entity.smownerid\n LEFT join users ON entity.smownerid= users.id\nWHERE entity.type='Leads' AND entity.deleted=0 AND leads.converted=0\n Hash Join (cost=14067.90..28066.53 rows=90379 width=26) (actual\ntime=536.009..1772.910 rows=337139 loops=1)\n Hash Cond: (leads.leadid = entity.id)\n -> Seq Scan on leads (cost=0.00..7764.83 rows=533002 width=18) (actual\ntime=0.008..429.576 rows=532960 loops=1)\n Filter: (converted = 0)\n -> Hash (cost=9406.25..9406.25 rows=372932 width=16) (actual\ntime=535.800..535.800 rows=342369 loops=1)\n Buckets: 65536 Batches: 1 Memory Usage: 16049kB\n -> Index Scan using entity_type_idx on entity\n (cost=0.00..9406.25 rows=372932 width=16) (actual time=0.030..305.250\nrows=342369 loops=1)\n Index Cond: ((type)::text = 'Leads'::text)\n\n\n\\d leads\n Table \"public.leads\"\n Column | Type | Modifiers\n\n------------------+------------------------+---------------------------------------\n leadid | integer | not null\n email | character varying(100) |\n interest | character varying(50) |\n firstname | character varying(100) |\n salutation | character varying(200) |\n lastname | character varying(100) | not null\n company | character varying(200) | not null\n annualrevenue | integer | default 0\n industry | character varying(200) |\n campaign | character varying(30) |\n rating | character varying(200) |\n leadstatus | character varying(50) |\n leadsource | character varying(200) |\n converted | integer | default 0\n designation | character varying(200) | default 'SalesMan'::character\nvarying\n licencekeystatus | character varying(50) |\n space | character varying(250) |\n comments | text |\n priority | character varying(50) |\n demorequest | character varying(50) |\n partnercontact | character varying(50) |\n productversion | character varying(20) |\n product | character varying(50) |\n maildate | date |\n nextstepdate | date |\n fundingsituation | character varying(50) |\n purpose | character varying(50) |\n evaluationstatus | character varying(50) |\n transferdate | date |\n revenuetype | character varying(50) |\n noofemployees | integer |\n yahooid | character varying(100) |\n assignleadchk | integer | default 0\n department | character varying(200) |\n emailoptout | character varying(3) | default 0\n siccode | character varying(50) |\nIndexes:\n \"leads_pkey\" PRIMARY KEY, btree (leadid)\n \"ftx_en_leads_company\" gin (to_tsvector('v_en'::regconfig,\nfor_fts(company::text)))\n \"ftx_en_leads_email\" gin (to_tsvector('v_en'::regconfig,\nfor_fts(email::text)))\n \"ftx_en_leads_emailoptout\" gin (to_tsvector('v_en'::regconfig,\nfor_fts(emailoptout::text)))\n \"ftx_en_leads_firstname\" gin (to_tsvector('v_en'::regconfig,\nfor_fts(firstname::text)))\n \"ftx_en_leads_lastname\" gin (to_tsvector('v_en'::regconfig,\nfor_fts(lastname::text)))\n \"ftx_en_leads_yahooid\" gin (to_tsvector('v_en'::regconfig,\nfor_fts(yahooid::text)))\n \"leads_converted_idx\" btree (converted)\n \"leads_leadsource_idx\" btree (leadsource)\n \"leads_leadstatus_idx\" btree (leadstatus)\n\n\n\n\\d entity\n Table \"public.entity\"\n Column | Type | 
Modifiers\n\n--------------------+-----------------------------+------------------------------\n id | integer | not null\n smcreatorid | integer | not null default 0\n smownerid | integer | not null default 0\n modifiedby | integer | not null default 0\n setype | character varying(30) | not null\n description | text |\n createdtime | timestamp without time zone | not null\n modifiedtime | timestamp without time zone | not null\n viewedtime | timestamp without time zone |\n status | character varying(50) |\n version | integer | not null default 0\n presence | integer | default 1\n deleted | integer | not null default 0\n owner_type | character(1) | not null default\n'U'::bpchar\n last_activity_date | timestamp without time zone |\nIndexes:\n \"entity_pkey\" PRIMARY KEY, btree (id)\n \"entity_createdtime_idx\" btree (createdtime)\n \"entity_modifiedby_idx\" btree (modifiedby)\n \"entity_modifiedtime_idx\" btree (modifiedtime)\n \"entity_setype_idx\" btree (setype) WHERE deleted = 0\n \"entity_smcreatorid_idx\" btree (smcreatorid)\n \"entity_smownerid_idx\" btree (smownerid)\n \"ftx_en_entity_description\" gin (to_tsvector('v_en'::regconfig,\nfor_fts(description)))\n \"entity_deleted_idx\" btree (deleted)\nReferenced by:\n TABLE \"service\" CONSTRAINT \"fk_1_service\" FOREIGN KEY (serviceid)\nREFERENCES entity(id) ON DELETE CASCADE\n TABLE \"servicecontracts\" CONSTRAINT \"fk_1_servicecontracts\" FOREIGN KEY\n(servicecontractsid) REFERENCES entity(id) ON DELETE CASCADE\n TABLE \"_emails\" CONSTRAINT \"fk__emails_id\" FOREIGN KEY (id) REFERENCES\nentity(id) ON DELETE CASCADE\n\n\nPlease advise.\n\nThanks.",
"msg_date": "Tue, 2 Apr 2013 14:57:45 -0400",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Planner is getting wrong row count"
},
{
"msg_contents": "AI Rumman <[email protected]> wrote:\n\n> Why is the following query getting wrong estimation of rows?\n> I am using Postgresql 9.2.1 with default_statistics_target = 100.\n> I execute vacuum analyze each night.\n\n> Hash Join (cost=14067.90..28066.53 rows=90379 width=26) (actual time=536.009..1772.910 rows=337139 loops=1)\n> Hash Cond: (leads.leadid = entity.id)\n> -> Seq Scan on leads (cost=0.00..7764.83 rows=533002 width=18) (actual time=0.008..429.576 rows=532960 loops=1)\n> Filter: (converted = 0)\n> -> Hash (cost=9406.25..9406.25 rows=372932 width=16) (actual time=535.800..535.800 rows=342369 loops=1)\n> Buckets: 65536 Batches: 1 Memory Usage: 16049kB\n> -> Index Scan using entity_type_idx on entity (cost=0.00..9406.25 rows=372932 width=16) (actual time=0.030..305.250 rows=342369 loops=1)\n> Index Cond: ((type)::text = 'Leads'::text)\n\nThe estimates match up well with actual until the hash join. That\nsuggests that there is a correlation between the join conditions\nand the other selection criteria which the planner doesn't know\nabout. Right now PostgreSQL has no way to adjust estimates based\non such correlations. If an inefficient plan is being chosen due\nto this, there are a few tricks to coerce the plan.\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 3 Apr 2013 08:34:33 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner is getting wrong row count"
}
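
Kevin mentions that a few tricks can coerce the plan without naming one. A generic example (my illustration only, not something from the thread) is to materialize the selective part behind a CTE, which acted as an optimizer fence up to PostgreSQL 11, so each half is planned on its own:

    WITH lead_entities AS (
        SELECT id, type, smownerid
        FROM entity
        WHERE type = 'Leads' AND deleted = 0   -- column names as used in the original query
    )
    SELECT e.id AS "Leads_id", e.type AS "Leads_type",
           l.firstname AS "Leads_firstname", l.lastname AS "Leads_lastname"
    FROM lead_entities e
    JOIN leads l ON l.leadid = e.id
    WHERE l.converted = 0;

Whether this helps at all depends on which plan was being mis-costed; the join estimate can still be off, but the fence changes the plan shapes the planner will consider, which is sometimes all that is needed.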
] |
[
{
"msg_contents": "Hi,\n\nI have query that should be quick, and used to be quick, but is not \nanymore... Explain analyze can be seen here http://explain.depesz.com/s/cpV\nbut it is fundamentaly quick : Total runtime: 0.545 ms.\n\nBut query execution takes 3.6 seconds. Only 12 rows are returned. Adding \na limit 1 has no influence.\n\nPostgresql client and server are on the same server, on localhost.\n\nI wonder where my 3.5 seconds are going....\n\nNotice that there is no fancy configuration to tell Postgresql to \nevaluate every query plan on earth, that is gecoxxxx settings are on \ntheir default values.\n\nAny idea on what can be happening that takes so long ?\n\nFranck",
"msg_date": "Thu, 04 Apr 2013 16:48:31 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "What happens between end of explain analyze and end of query\n execution\n\t?"
},
{
"msg_contents": "We had a problem where the actual query planning time blew up and took way\nmore time then the query execution. We reproduced the problem by forming a\nnew connection and then just explaining the query. If that takes more than\na couple of milliseconds you likely have the problem. The second plan was\nfast.\n\nWe then dtraced the backend process for a new connection and found that\nopening files had become super slow. In our case this was caused by\nrunning the database on nfs.\n\nGood luck,\n\nNik\n\n\nOn Thu, Apr 4, 2013 at 10:48 AM, Franck Routier <[email protected]>wrote:\n\n> Hi,\n>\n> I have query that should be quick, and used to be quick, but is not\n> anymore... Explain analyze can be seen here http://explain.depesz.com/s/**\n> cpV <http://explain.depesz.com/s/cpV>\n> but it is fundamentaly quick : Total runtime: 0.545 ms.\n>\n> But query execution takes 3.6 seconds. Only 12 rows are returned. Adding a\n> limit 1 has no influence.\n>\n> Postgresql client and server are on the same server, on localhost.\n>\n> I wonder where my 3.5 seconds are going....\n>\n> Notice that there is no fancy configuration to tell Postgresql to evaluate\n> every query plan on earth, that is gecoxxxx settings are on their default\n> values.\n>\n> Any idea on what can be happening that takes so long ?\n>\n> Franck\n>\n>\n\nWe had a problem where the actual query planning time blew up and took way more time then the query execution. We reproduced the problem by forming a new connection and then just explaining the query. If that takes more than a couple of milliseconds you likely have the problem. The second plan was fast.\nWe then dtraced the backend process for a new connection and found that opening files had become super slow. In our case this was caused by running the database on nfs.\n\nGood luck,NikOn Thu, Apr 4, 2013 at 10:48 AM, Franck Routier <[email protected]> wrote:\nHi,\n\nI have query that should be quick, and used to be quick, but is not anymore... Explain analyze can be seen here http://explain.depesz.com/s/cpV\nbut it is fundamentaly quick : Total runtime: 0.545 ms.\n\nBut query execution takes 3.6 seconds. Only 12 rows are returned. Adding a limit 1 has no influence.\n\nPostgresql client and server are on the same server, on localhost.\n\nI wonder where my 3.5 seconds are going....\n\nNotice that there is no fancy configuration to tell Postgresql to evaluate every query plan on earth, that is gecoxxxx settings are on their default values.\n\nAny idea on what can be happening that takes so long ?\n\nFranck",
"msg_date": "Thu, 4 Apr 2013 11:01:33 -0400",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What happens between end of explain analyze and end of\n\tquery execution ?"
},
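
Nik's reproduction step can be spelled out explicitly. A sketch using psql's \timing meta-command; the table name and predicate are placeholders, and the point is only that plain EXPLAIN does not execute the query, so on servers of this era its elapsed time is essentially parse plus plan time:

    -- in a brand-new psql session:
    \timing on
    EXPLAIN SELECT * FROM some_table WHERE id = 42;   -- first plan on a cold connection
    EXPLAIN SELECT * FROM some_table WHERE id = 42;   -- repeat on the now-warm connection

A large first timing that shrinks on the second run points at per-connection costs (catalog or file access, as in Nik's NFS case); timings that stay large point at planning itself.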
{
"msg_contents": "Right, explain alone takes 3.6 seconds, so the time really seems to go \nquery planning...\n\n\nLe 04/04/2013 17:01, Nikolas Everett a écrit :\n> We had a problem where the actual query planning time blew up and took \n> way more time then the query execution. We reproduced the problem by \n> forming a new connection and then just explaining the query. If that \n> takes more than a couple of milliseconds you likely have the problem. \n> The second plan was fast.\n>\n> We then dtraced the backend process for a new connection and found \n> that opening files had become super slow. In our case this was caused \n> by running the database on nfs.\n>\n> Good luck,\n>\n> Nik\n>\n>\n> On Thu, Apr 4, 2013 at 10:48 AM, Franck Routier \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Hi,\n>\n> I have query that should be quick, and used to be quick, but is\n> not anymore... Explain analyze can be seen here\n> http://explain.depesz.com/s/cpV\n> but it is fundamentaly quick : Total runtime: 0.545 ms.\n>\n> But query execution takes 3.6 seconds. Only 12 rows are returned.\n> Adding a limit 1 has no influence.\n>\n> Postgresql client and server are on the same server, on localhost.\n>\n> I wonder where my 3.5 seconds are going....\n>\n> Notice that there is no fancy configuration to tell Postgresql to\n> evaluate every query plan on earth, that is gecoxxxx settings are\n> on their default values.\n>\n> Any idea on what can be happening that takes so long ?\n>\n> Franck\n>\n>",
"msg_date": "Thu, 04 Apr 2013 17:44:05 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What happens between end of explain analyze and end\n\tof query execution ?"
},
{
"msg_contents": "Franck Routier <[email protected]> writes:\n> Right, explain alone takes 3.6 seconds, so the time really seems to go \n> query planning...\n\nWell, you've not shown us the query, so it's all going to be\nspeculation. But maybe you have some extremely expensive function that\nthe planner is evaluating to fold to a constant, or something like that?\nThe generated plan isn't terribly complicated, but we can't see what\nwas required to produce it.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 04 Apr 2013 12:25:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What happens between end of explain analyze and end of query\n\texecution ?"
},
{
"msg_contents": "Le 04/04/2013 18:25, Tom Lane a écrit :\n> Franck Routier <[email protected]> writes:\n>> Right, explain alone takes 3.6 seconds, so the time really seems to go\n>> query planning...\n> Well, you've not shown us the query, so it's all going to be\n> speculation. But maybe you have some extremely expensive function that\n> the planner is evaluating to fold to a constant, or something like that?\n> The generated plan isn't terribly complicated, but we can't see what\n> was required to produce it.\n>\n> \t\t\tregards, tom lane\n>\n>\nThe request is not using any function. It looks like this:\n\nSELECT *\n FROM sanrss\n LEFT JOIN sanrum ON sanrum.sanrum___rforefide = \nsanrss.sanrss___rforefide AND sanrum.sanrum___rfovsnide = \nsanrss.sanrss___rfovsnide AND sanrum.sanrum___sanrsside = sanrss.sanrsside\n LEFT JOIN sanact ON sanact.sanact___rforefide = \nsanrum.sanrum___rforefide AND sanact.sanact___rfovsnide = \nsanrum.sanrum___rfovsnide AND sanact.sanact___sanrsside = \nsanrum.sanrum___sanrsside AND sanact.sanact___sanrumide = \nsanrum.sanrumide AND sanact.sanact___sanrumide IS NOT NULL AND \nsanact.sanact___rsanopide='CCAM'\n LEFT JOIN sandia ON sandia.sandia___rforefide = \nsanrum.sanrum___rforefide AND sandia.sandia___rfovsnide = \nsanrum.sanrum___rfovsnide AND sandia.sandia___sanrsside = \nsanrum.sanrum___sanrsside AND sandia.sandia___sanrumide = \nsanrum.sanrumide AND sandia.sandiasig=1\n LEFT JOIN saneds ON sanrss.sanrss___rforefide = \nsaneds.saneds___rforefide AND sanrss.sanrss___rfovsnide = \nsaneds.saneds___rfovsnide AND sanrss.sanrss___sanedside = saneds.sanedside\n LEFT JOIN rsaidp ON saneds.saneds___rforefide = \nrsaidp.rsaidp___rforefide AND saneds.saneds___rsaidpide = rsaidp.rsaidpide\n WHERE sanrss.sanrss___rforefide = 'CHCL' AND \nsanrss.sanrss___rfovsnide = '201012_600' AND sanrss.sanrsside = '1188100'\n ORDER BY sanrum.sanrumord, sanrum.sanrumide\n\nSchema looks like this :\nrsaidp\n |\n v\nsanrss --------\n | |\n v v\nsanrum sandia\n |\n v\nsanact\n\n\nPrimary keys are most made of several varchars. Foreign keys do exist. \nQuery is getting these data for one specific sanrss.\nThis used to take around 50ms to execute, and is now taking 3.5 seconds. \nAnd it looks like this is spent computing a query plan...\n\nI also tried: PREPARE qry(id) as select ....\nThe prepare takes 3.5 seconds. Execute qry(value) takes a few \nmilliseconds...\n\nRegards,\nFranck",
"msg_date": "Thu, 04 Apr 2013 20:49:24 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What happens between end of explain analyze and end\n\tof query execution ?"
},
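
The PREPARE experiment at the end of the message is itself a usable way to split plan time from run time on releases where PREPARE plans immediately (as it evidently does here, given the 3.5 s PREPARE; newer releases may defer planning to the first EXECUTE). A sketch with a hypothetical statement name:

    \timing on
    PREPARE plan_probe AS
        SELECT 1;            -- stand-in for the full query quoted above
    EXECUTE plan_probe;      -- execution measured separately from the PREPARE step
    DEALLOCATE plan_probe;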
{
"msg_contents": "On Apr 4, 2013, at 2:49 PM, Franck Routier <[email protected]> wrote:\n\n> Le 04/04/2013 18:25, Tom Lane a écrit :\n>> Franck Routier <[email protected]> writes:\n>>> Right, explain alone takes 3.6 seconds, so the time really seems to go \n>>> query planning...\n>> Well, you've not shown us the query, so it's all going to be\n>> speculation. But maybe you have some extremely expensive function that\n>> the planner is evaluating to fold to a constant, or something like that?\n>> The generated plan isn't terribly complicated, but we can't see what\n>> was required to produce it.\n>> \n>> \t\t\tregards, tom lane\n>> \n>> \n> The request is not using any function. It looks like this:\n> \n> SELECT *\n> FROM sanrss\n> LEFT JOIN sanrum ON sanrum.sanrum___rforefide = sanrss.sanrss___rforefide AND sanrum.sanrum___rfovsnide = sanrss.sanrss___rfovsnide AND sanrum.sanrum___sanrsside = sanrss.sanrsside\n> LEFT JOIN sanact ON sanact.sanact___rforefide = sanrum.sanrum___rforefide AND sanact.sanact___rfovsnide = sanrum.sanrum___rfovsnide AND sanact.sanact___sanrsside = sanrum.sanrum___sanrsside AND sanact.sanact___sanrumide = sanrum.sanrumide AND sanact.sanact___sanrumide IS NOT NULL AND sanact.sanact___rsanopide='CCAM'\n> LEFT JOIN sandia ON sandia.sandia___rforefide = sanrum.sanrum___rforefide AND sandia.sandia___rfovsnide = sanrum.sanrum___rfovsnide AND sandia.sandia___sanrsside = sanrum.sanrum___sanrsside AND sandia.sandia___sanrumide = sanrum.sanrumide AND sandia.sandiasig=1\n> LEFT JOIN saneds ON sanrss.sanrss___rforefide = saneds.saneds___rforefide AND sanrss.sanrss___rfovsnide = saneds.saneds___rfovsnide AND sanrss.sanrss___sanedside = saneds.sanedside\n> LEFT JOIN rsaidp ON saneds.saneds___rforefide = rsaidp.rsaidp___rforefide AND saneds.saneds___rsaidpide = rsaidp.rsaidpide\n> WHERE sanrss.sanrss___rforefide = 'CHCL' AND sanrss.sanrss___rfovsnide = '201012_600' AND sanrss.sanrsside = '1188100'\n> ORDER BY sanrum.sanrumord, sanrum.sanrumide\n> \n> Schema looks like this :\n> rsaidp\n> |\n> v\n> sanrss --------\n> | |\n> v v\n> sanrum sandia\n> |\n> v\n> sanact\n> \n> \n> Primary keys are most made of several varchars. Foreign keys do exist. Query is getting these data for one specific sanrss.\n> This used to take around 50ms to execute, and is now taking 3.5 seconds. And it looks like this is spent computing a query plan...\n> \n> I also tried: PREPARE qry(id) as select ....\n> The prepare takes 3.5 seconds. Execute qry(value) takes a few milliseconds...\n> \n> Regards,\n> Franck\n\nIs it only this query or all queries?\nOn Apr 4, 2013, at 2:49 PM, Franck Routier <[email protected]> wrote:\n\nLe 04/04/2013 18:25, Tom Lane a écrit :\n\n\nFranck Routier <[email protected]> writes:\n\n\nRight, explain alone takes 3.6 seconds, so the time really seems to go \nquery planning...\n\n\nWell, you've not shown us the query, so it's all going to be\nspeculation. But maybe you have some extremely expensive function that\nthe planner is evaluating to fold to a constant, or something like that?\nThe generated plan isn't terribly complicated, but we can't see what\nwas required to produce it.\n\n\t\t\tregards, tom lane\n\n\n\n\nThe request is not using any function. 
It looks like this:\n\nSELECT *\n FROM sanrss\n LEFT JOIN sanrum ON sanrum.sanrum___rforefide =\n sanrss.sanrss___rforefide AND sanrum.sanrum___rfovsnide =\n sanrss.sanrss___rfovsnide AND sanrum.sanrum___sanrsside =\n sanrss.sanrsside\n LEFT JOIN sanact ON sanact.sanact___rforefide =\n sanrum.sanrum___rforefide AND sanact.sanact___rfovsnide =\n sanrum.sanrum___rfovsnide AND sanact.sanact___sanrsside =\n sanrum.sanrum___sanrsside AND sanact.sanact___sanrumide =\n sanrum.sanrumide AND sanact.sanact___sanrumide IS NOT NULL AND\n sanact.sanact___rsanopide='CCAM'\n LEFT JOIN sandia ON sandia.sandia___rforefide =\n sanrum.sanrum___rforefide AND sandia.sandia___rfovsnide =\n sanrum.sanrum___rfovsnide AND sandia.sandia___sanrsside =\n sanrum.sanrum___sanrsside AND sandia.sandia___sanrumide =\n sanrum.sanrumide AND sandia.sandiasig=1\n LEFT JOIN saneds ON sanrss.sanrss___rforefide =\n saneds.saneds___rforefide AND sanrss.sanrss___rfovsnide =\n saneds.saneds___rfovsnide AND sanrss.sanrss___sanedside =\n saneds.sanedside\n LEFT JOIN rsaidp ON saneds.saneds___rforefide =\n rsaidp.rsaidp___rforefide AND saneds.saneds___rsaidpide =\n rsaidp.rsaidpide\n WHERE sanrss.sanrss___rforefide = 'CHCL' AND\n sanrss.sanrss___rfovsnide = '201012_600' AND sanrss.sanrsside\n = '1188100'\n ORDER BY sanrum.sanrumord, sanrum.sanrumide\n\nSchema looks like this :\n rsaidp\n |\n v\n sanrss --------\n | |\n v v\n sanrum sandia\n |\n v\n sanact\n\n\nPrimary keys are most made of several varchars. Foreign\n keys do exist. Query is getting these data for one specific\n sanrss.\nThis used to take around 50ms to execute, and is now taking\n 3.5 seconds. And it looks like this is spent computing a query\n plan...\n\nI also tried: PREPARE qry(id) as select ....\nThe prepare takes 3.5 seconds. Execute qry(value) takes a\n few milliseconds...\n\nRegards,\nFranck\nIs it only this query or all queries?",
"msg_date": "Thu, 4 Apr 2013 15:05:52 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: What happens between end of explain analyze and end of query\n\texecution ?"
},
{
"msg_contents": "Franck Routier <[email protected]> writes:\n> The request is not using any function. It looks like this:\n> [ unexciting query ]\n\nHmph. Can't see any reason for that to take a remarkably long time to\nplan. Can you put together a self-contained test case demonstrating\nexcessive planning time? What PG version is this, anyway?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 04 Apr 2013 15:08:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What happens between end of explain analyze and end of query\n\texecution ?"
},
{
"msg_contents": "On Thu, Apr 4, 2013 at 9:48 AM, Franck Routier <[email protected]> wrote:\n> Hi,\n>\n> I have query that should be quick, and used to be quick, but is not\n> anymore... Explain analyze can be seen here http://explain.depesz.com/s/cpV\n> but it is fundamentaly quick : Total runtime: 0.545 ms.\n>\n> But query execution takes 3.6 seconds. Only 12 rows are returned. Adding a\n> limit 1 has no influence.\n>\n> Postgresql client and server are on the same server, on localhost.\n>\n> I wonder where my 3.5 seconds are going....\n\nAlso, 3.6 seconds according to what exactly? For example if your 12\nrows contain megabytes of bytea data that would be a possible cause\n(albeit unlikely) since explain doesn't include network transfer time.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 4 Apr 2013 14:19:23 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What happens between end of explain analyze and end of\n\tquery execution ?"
},
{
"msg_contents": "Le 04/04/2013 21:08, Tom Lane a écrit :\n> Franck Routier <[email protected]> writes:\n>> The request is not using any function. It looks like this:\n>> [ unexciting query ]\n> Hmph. Can't see any reason for that to take a remarkably long time to\n> plan. Can you put together a self-contained test case demonstrating\n> excessive planning time? What PG version is this, anyway?\n>\n> \t\t\tregards, tom lane\n>\n>\nWell, I don't know how to reproduce, as it is really only happening on \nthis database.\n\nWhat I notice is that removing joins has a huge impact on the time \nexplain takes to return:\n\nThe full query takes 2.6 seconds to return. Notice it has dropped from \n3.6 seconds to 2.6 since yesterday after I did a vacuum analyze on the \ntables that go into the request.\n\nEXPLAIN SELECT *\n FROM sanrss\n LEFT JOIN sanrum ON sanrum.sanrum___rforefide = \nsanrss.sanrss___rforefide AND sanrum.sanrum___rfovsnide = \nsanrss.sanrss___rfovsnide AND sanrum.sanrum___sanrsside = sanrss.sanrsside\n LEFT JOIN sanact ON sanact.sanact___rforefide = \nsanrum.sanrum___rforefide AND sanact.sanact___rfovsnide = \nsanrum.sanrum___rfovsnide AND sanact.sanact___sanrsside = \nsanrum.sanrum___sanrsside AND sanact.sanact___sanrumide = \nsanrum.sanrumide AND sanact.sanact___sanrumide IS NOT NULL AND \nsanact.sanact___rsanopide='CCAM'\n LEFT JOIN sandia ON sandia.sandia___rforefide = \nsanrum.sanrum___rforefide AND sandia.sandia___rfovsnide = \nsanrum.sanrum___rfovsnide AND sandia.sandia___sanrsside = \nsanrum.sanrum___sanrsside AND sandia.sandia___sanrumide = \nsanrum.sanrumide AND sandia.sandiasig=1\n LEFT JOIN saneds ON sanrss.sanrss___rforefide = \nsaneds.saneds___rforefide AND sanrss.sanrss___rfovsnide = \nsaneds.saneds___rfovsnide AND sanrss.sanrss___sanedside = saneds.sanedside\n LEFT JOIN rsaidp ON saneds.saneds___rforefide = \nrsaidp.rsaidp___rforefide AND saneds.saneds___rsaidpide = rsaidp.rsaidpide\n WHERE sanrss.sanrss___rforefide = 'CHCL' AND \nsanrss.sanrss___rfovsnide = '201012_600' AND sanrss.sanrsside = '1188100'\n ORDER BY sanrum.sanrumord, sanrum.sanrumide\n\n==> 2.6 seconds\n\nIf I remove the join on either table 'sandia' or table 'saneds', the \nexplain return in 1.2 seconds. 
If I remove both, explain returns in 48ms.\n\nEXPLAIN SELECT *\n FROM sanrss\n LEFT JOIN sanrum ON sanrum.sanrum___rforefide = \nsanrss.sanrss___rforefide AND sanrum.sanrum___rfovsnide = \nsanrss.sanrss___rfovsnide AND sanrum.sanrum___sanrsside = sanrss.sanrsside\n LEFT JOIN sanact ON sanact.sanact___rforefide = \nsanrum.sanrum___rforefide AND sanact.sanact___rfovsnide = \nsanrum.sanrum___rfovsnide AND sanact.sanact___sanrsside = \nsanrum.sanrum___sanrsside AND sanact.sanact___sanrumide = \nsanrum.sanrumide AND sanact.sanact___sanrumide IS NOT NULL AND \nsanact.sanact___rsanopide='CCAM'\n LEFT JOIN saneds ON sanrss.sanrss___rforefide = \nsaneds.saneds___rforefide AND sanrss.sanrss___rfovsnide = \nsaneds.saneds___rfovsnide AND sanrss.sanrss___sanedside = saneds.sanedside\n LEFT JOIN rsaidp ON saneds.saneds___rforefide = \nrsaidp.rsaidp___rforefide AND saneds.saneds___rsaidpide = rsaidp.rsaidpide\n WHERE sanrss.sanrss___rforefide = 'CHCL' AND \nsanrss.sanrss___rfovsnide = '201012_600' AND sanrss.sanrsside = '1188100'\n ORDER BY sanrum.sanrumord, sanrum.sanrumide\n\n==> 1.2 seconds\n\nEXPLAIN SELECT *\n FROM sanrss\n LEFT JOIN sanrum ON sanrum.sanrum___rforefide = \nsanrss.sanrss___rforefide AND sanrum.sanrum___rfovsnide = \nsanrss.sanrss___rfovsnide AND sanrum.sanrum___sanrsside = sanrss.sanrsside\n LEFT JOIN sanact ON sanact.sanact___rforefide = \nsanrum.sanrum___rforefide AND sanact.sanact___rfovsnide = \nsanrum.sanrum___rfovsnide AND sanact.sanact___sanrsside = \nsanrum.sanrum___sanrsside AND sanact.sanact___sanrumide = \nsanrum.sanrumide AND sanact.sanact___sanrumide IS NOT NULL AND \nsanact.sanact___rsanopide='CCAM'\n LEFT JOIN sandia ON sandia.sandia___rforefide = \nsanrum.sanrum___rforefide AND sandia.sandia___rfovsnide = \nsanrum.sanrum___rfovsnide AND sandia.sandia___sanrsside = \nsanrum.sanrum___sanrsside AND sandia.sandia___sanrumide = \nsanrum.sanrumide AND sandia.sandiasig=1\n WHERE sanrss.sanrss___rforefide = 'CHCL' AND \nsanrss.sanrss___rfovsnide = '201012_600' AND sanrss.sanrsside = '1188100'\n ORDER BY sanrum.sanrumord, sanrum.sanrumide\n\n==> 1.2 seconds\n\nEXPLAIN SELECT *\n FROM sanrss\n LEFT JOIN sanrum ON sanrum.sanrum___rforefide = \nsanrss.sanrss___rforefide AND sanrum.sanrum___rfovsnide = \nsanrss.sanrss___rfovsnide AND sanrum.sanrum___sanrsside = sanrss.sanrsside\n LEFT JOIN sanact ON sanact.sanact___rforefide = \nsanrum.sanrum___rforefide AND sanact.sanact___rfovsnide = \nsanrum.sanrum___rfovsnide AND sanact.sanact___sanrsside = \nsanrum.sanrum___sanrsside AND sanact.sanact___sanrumide = \nsanrum.sanrumide AND sanact.sanact___sanrumide IS NOT NULL AND \nsanact.sanact___rsanopide='CCAM'\n WHERE sanrss.sanrss___rforefide = 'CHCL' AND \nsanrss.sanrss___rfovsnide = '201012_600' AND sanrss.sanrsside = '1188100'\n ORDER BY sanrum.sanrumord, sanrum.sanrumide\n\n==> 48 ms\n\nMaybe the statistics tables for sandia and saneds are in a bad shape ? \n(don't know how to check this).\n\nRegards,\n\nFranck",
"msg_date": "Fri, 05 Apr 2013 15:55:08 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What happens between end of explain analyze and end\n\tof query execution ?"
},
{
"msg_contents": "Franck Routier <[email protected]> writes:\n> Le 04/04/2013 21:08, Tom Lane a �crit :\n>> Hmph. Can't see any reason for that to take a remarkably long time to\n>> plan. Can you put together a self-contained test case demonstrating\n>> excessive planning time? What PG version is this, anyway?\n\n> What I notice is that removing joins has a huge impact on the time \n> explain takes to return:\n\nHm, kind of looks like it's just taking an unreasonable amount of time\nto process each join clause. What have you got the statistics targets\nset to in this database? What are the datatypes of the join columns?\nAnd (again) what PG version is this exactly?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 05 Apr 2013 10:17:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What happens between end of explain analyze and end of query\n\texecution ?"
},
{
"msg_contents": "On Fri, Apr 5, 2013 at 8:55 AM, Franck Routier <[email protected]> wrote:\n> Le 04/04/2013 21:08, Tom Lane a écrit :\n>\n>> Franck Routier <[email protected]> writes:\n>>>\n>>> The request is not using any function. It looks like this:\n>>> [ unexciting query ]\n>>\n>> Hmph. Can't see any reason for that to take a remarkably long time to\n>> plan. Can you put together a self-contained test case demonstrating\n>> excessive planning time? What PG version is this, anyway?\n>>\n>> regards, tom lane\n>>\n>>\n> Well, I don't know how to reproduce, as it is really only happening on this\n> database.\n>\n> What I notice is that removing joins has a huge impact on the time explain\n> takes to return:\n>\n> The full query takes 2.6 seconds to return. Notice it has dropped from 3.6\n> seconds to 2.6 since yesterday after I did a vacuum analyze on the tables\n> that go into the request.\n>\n> EXPLAIN SELECT *\n>\n> FROM sanrss\n> LEFT JOIN sanrum ON sanrum.sanrum___rforefide =\n> sanrss.sanrss___rforefide AND sanrum.sanrum___rfovsnide =\n> sanrss.sanrss___rfovsnide AND sanrum.sanrum___sanrsside = sanrss.sanrsside\n> LEFT JOIN sanact ON sanact.sanact___rforefide =\n> sanrum.sanrum___rforefide AND sanact.sanact___rfovsnide =\n> sanrum.sanrum___rfovsnide AND sanact.sanact___sanrsside =\n> sanrum.sanrum___sanrsside AND sanact.sanact___sanrumide = sanrum.sanrumide\n> AND sanact.sanact___sanrumide IS NOT NULL AND\n> sanact.sanact___rsanopide='CCAM'\n> LEFT JOIN sandia ON sandia.sandia___rforefide =\n> sanrum.sanrum___rforefide AND sandia.sandia___rfovsnide =\n> sanrum.sanrum___rfovsnide AND sandia.sandia___sanrsside =\n> sanrum.sanrum___sanrsside AND sandia.sandia___sanrumide = sanrum.sanrumide\n> AND sandia.sandiasig=1\n> LEFT JOIN saneds ON sanrss.sanrss___rforefide =\n> saneds.saneds___rforefide AND sanrss.sanrss___rfovsnide =\n> saneds.saneds___rfovsnide AND sanrss.sanrss___sanedside = saneds.sanedside\n> LEFT JOIN rsaidp ON saneds.saneds___rforefide =\n> rsaidp.rsaidp___rforefide AND saneds.saneds___rsaidpide = rsaidp.rsaidpide\n> WHERE sanrss.sanrss___rforefide = 'CHCL' AND\n> sanrss.sanrss___rfovsnide = '201012_600' AND sanrss.sanrsside = '1188100'\n> ORDER BY sanrum.sanrumord, sanrum.sanrumide\n>\n> ==> 2.6 seconds\n>\n> If I remove the join on either table 'sandia' or table 'saneds', the explain\n> return in 1.2 seconds. 
If I remove both, explain returns in 48ms.\n>\n> EXPLAIN SELECT *\n>\n> FROM sanrss\n> LEFT JOIN sanrum ON sanrum.sanrum___rforefide =\n> sanrss.sanrss___rforefide AND sanrum.sanrum___rfovsnide =\n> sanrss.sanrss___rfovsnide AND sanrum.sanrum___sanrsside = sanrss.sanrsside\n> LEFT JOIN sanact ON sanact.sanact___rforefide =\n> sanrum.sanrum___rforefide AND sanact.sanact___rfovsnide =\n> sanrum.sanrum___rfovsnide AND sanact.sanact___sanrsside =\n> sanrum.sanrum___sanrsside AND sanact.sanact___sanrumide = sanrum.sanrumide\n> AND sanact.sanact___sanrumide IS NOT NULL AND\n> sanact.sanact___rsanopide='CCAM'\n> LEFT JOIN saneds ON sanrss.sanrss___rforefide =\n> saneds.saneds___rforefide AND sanrss.sanrss___rfovsnide =\n> saneds.saneds___rfovsnide AND sanrss.sanrss___sanedside = saneds.sanedside\n> LEFT JOIN rsaidp ON saneds.saneds___rforefide =\n> rsaidp.rsaidp___rforefide AND saneds.saneds___rsaidpide = rsaidp.rsaidpide\n> WHERE sanrss.sanrss___rforefide = 'CHCL' AND\n> sanrss.sanrss___rfovsnide = '201012_600' AND sanrss.sanrsside = '1188100'\n> ORDER BY sanrum.sanrumord, sanrum.sanrumide\n>\n> ==> 1.2 seconds\n>\n> EXPLAIN SELECT *\n>\n> FROM sanrss\n> LEFT JOIN sanrum ON sanrum.sanrum___rforefide =\n> sanrss.sanrss___rforefide AND sanrum.sanrum___rfovsnide =\n> sanrss.sanrss___rfovsnide AND sanrum.sanrum___sanrsside = sanrss.sanrsside\n> LEFT JOIN sanact ON sanact.sanact___rforefide =\n> sanrum.sanrum___rforefide AND sanact.sanact___rfovsnide =\n> sanrum.sanrum___rfovsnide AND sanact.sanact___sanrsside =\n> sanrum.sanrum___sanrsside AND sanact.sanact___sanrumide = sanrum.sanrumide\n> AND sanact.sanact___sanrumide IS NOT NULL AND\n> sanact.sanact___rsanopide='CCAM'\n> LEFT JOIN sandia ON sandia.sandia___rforefide =\n> sanrum.sanrum___rforefide AND sandia.sandia___rfovsnide =\n> sanrum.sanrum___rfovsnide AND sandia.sandia___sanrsside =\n> sanrum.sanrum___sanrsside AND sandia.sandia___sanrumide = sanrum.sanrumide\n> AND sandia.sandiasig=1\n> WHERE sanrss.sanrss___rforefide = 'CHCL' AND\n> sanrss.sanrss___rfovsnide = '201012_600' AND sanrss.sanrsside = '1188100'\n> ORDER BY sanrum.sanrumord, sanrum.sanrumide\n>\n> ==> 1.2 seconds\n>\n> EXPLAIN SELECT *\n>\n> FROM sanrss\n> LEFT JOIN sanrum ON sanrum.sanrum___rforefide =\n> sanrss.sanrss___rforefide AND sanrum.sanrum___rfovsnide =\n> sanrss.sanrss___rfovsnide AND sanrum.sanrum___sanrsside = sanrss.sanrsside\n> LEFT JOIN sanact ON sanact.sanact___rforefide =\n> sanrum.sanrum___rforefide AND sanact.sanact___rfovsnide =\n> sanrum.sanrum___rfovsnide AND sanact.sanact___sanrsside =\n> sanrum.sanrum___sanrsside AND sanact.sanact___sanrumide = sanrum.sanrumide\n> AND sanact.sanact___sanrumide IS NOT NULL AND\n> sanact.sanact___rsanopide='CCAM'\n> WHERE sanrss.sanrss___rforefide = 'CHCL' AND\n> sanrss.sanrss___rfovsnide = '201012_600' AND sanrss.sanrsside = '1188100'\n> ORDER BY sanrum.sanrumord, sanrum.sanrumide\n>\n> ==> 48 ms\n>\n> Maybe the statistics tables for sandia and saneds are in a bad shape ?\n> (don't know how to check this).\n\n\nOk, \"explain\" (without analyze) is measuring plan time only (not\nexecution time). Can you confirm that's the time we are measuring\n(and again, according to what)? Performance issues here are a\ndifferent ball game. Please supply precise version#, there were a\ncouple of plantime bugs fixed recently.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 5 Apr 2013 09:17:45 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What happens between end of explain analyze and end of\n\tquery execution ?"
},
{
"msg_contents": "On Fri, Apr 5, 2013 at 9:55 AM, Franck Routier <[email protected]>wrote:\n\n> Le 04/04/2013 21:08, Tom Lane a écrit :\n> Maybe the statistics tables for sandia and saneds are in a bad shape ?\n> (don't know how to check this).\n>\n> Regards,\n>\n> Franck\n>\n>\n\nCould this be caused by system table bloat?\n\nAlso, can you check how long it takes to plan:\n1. A query without a table at all (SELECT NOW())\n2. A query with an unrelated table\n\nAgain, what version of PostgreSQL is this?\n\nOn Fri, Apr 5, 2013 at 9:55 AM, Franck Routier <[email protected]> wrote:\nLe 04/04/2013 21:08, Tom Lane a écrit :\nMaybe the statistics tables for sandia and saneds are in a bad shape ? (don't know how to check this).\n\nRegards,\n\nFranck\n\nCould this be caused by system table bloat?Also, can you check how long it takes to plan:\n1. A query without a table at all (SELECT NOW())2. A query with an unrelated table\n\nAgain, what version of PostgreSQL is this?",
"msg_date": "Fri, 5 Apr 2013 10:18:01 -0400",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What happens between end of explain analyze and end of\n\tquery execution ?"
},
{
"msg_contents": "Le 05/04/2013 16:17, Tom Lane a écrit :\n> Franck Routier <[email protected]> writes:\n>> Le 04/04/2013 21:08, Tom Lane a écrit :\n>>> Hmph. Can't see any reason for that to take a remarkably long time to\n>>> plan. Can you put together a self-contained test case demonstrating\n>>> excessive planning time? What PG version is this, anyway?\n>> What I notice is that removing joins has a huge impact on the time\n>> explain takes to return:\n> Hm, kind of looks like it's just taking an unreasonable amount of time\n> to process each join clause. What have you got the statistics targets\n> set to in this database? What are the datatypes of the join columns?\n> And (again) what PG version is this exactly?\n>\n> \t\t\tregards, tom lane\n>\n>\nPostgresql version is 8.4.8.\nValue for default_statistics_target is 5000 (maybe this is the problem ?)\nJoin columns datatypes are all varchar(32).\n\nAlso notice that not all joins have equals imppact on the time taken: \nremoving join on some tables has no impact (sanact, rsaidp), while \nremoving joins on others (saneds, sandia) has an important effect...\n\nRegards,\nFranck",
"msg_date": "Sat, 06 Apr 2013 10:02:39 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What happens between end of explain analyze and end\n\tof query execution ?"
},
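
Given the answer above (default_statistics_target = 5000, varchar(32) join columns), it can be worth confirming what target is actually in force and whether any column carries a per-column override. A sketch for the versions discussed in this thread; attstattarget is -1 for columns that simply follow the default:

    SHOW default_statistics_target;

    SELECT c.relname, a.attname, a.attstattarget
    FROM pg_attribute a
    JOIN pg_class c ON c.oid = a.attrelid
    WHERE c.relname IN ('sanrss', 'sanrum', 'sandia', 'saneds')
      AND a.attnum > 0
      AND NOT a.attisdropped
    ORDER BY c.relname, a.attnum;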
{
"msg_contents": "Le 05/04/2013 16:17, Merlin Moncure a écrit :\n>\n> Ok, \"explain\" (without analyze) is measuring plan time only (not\n> execution time). Can you confirm that's the time we are measuring\n> (and again, according to what)? Performance issues here are a\n> different ball game. Please supply precise version#, there were a\n> couple of plantime bugs fixed recently.\n>\n> merlin\nYes, I confirm time is taken by analyze alone. Executing the query is quick.\nPG version is 8.4.8.\n\nFranck",
"msg_date": "Sat, 06 Apr 2013 10:04:09 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What happens between end of explain analyze and end\n\tof query execution ?"
},
{
"msg_contents": "Le 05/04/2013 16:18, Nikolas Everett a écrit :\n> On Fri, Apr 5, 2013 at 9:55 AM, Franck Routier \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Le 04/04/2013 21:08, Tom Lane a écrit :\n> Maybe the statistics tables for sandia and saneds are in a bad\n> shape ? (don't know how to check this).\n>\n> Regards,\n>\n> Franck\n>\n>\n>\n> Could this be caused by system table bloat?\n>\n> Also, can you check how long it takes to plan:\n> 1. A query without a table at all (SELECT NOW())\n> 2. A query with an unrelated table\n>\n> Again, what version of PostgreSQL is this?\n\nPG 8.4.8\nselect now() is quick (15ms from pgadmin)\nBut I can reproduce the problem on other tables : explain on a query \nwith 4 join takes 4.5 seconds on an other set of tables...\n\nSystem bloat... maybe. Not sure how to check. But as Tom asked, \ndefault_statistics_target is 5000. Maybe the problem is here ? What \nshould I look after in pg_statistic to tell is there is a prolem here ?\n\nRegards,\nFranck",
"msg_date": "Sat, 06 Apr 2013 10:13:37 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What happens between end of explain analyze and end\n\tof query execution ?"
},
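
For the "what should I look at in pg_statistic" question above, the pg_stats view is the readable face of pg_statistic. A sketch that shows how long the most-common-values and histogram arrays actually are for the tables in the query (if array_length() balks at the anyarray columns on some versions, pg_column_size() gives a rough size instead):

    SELECT tablename, attname, n_distinct,
           array_length(most_common_vals, 1)  AS mcv_entries,
           array_length(histogram_bounds, 1)  AS histogram_entries
    FROM pg_stats
    WHERE tablename IN ('sanrss', 'sanrum', 'sanact', 'sandia', 'saneds', 'rsaidp')
    ORDER BY mcv_entries DESC NULLS LAST;

With a statistics target of 5000, MCV lists approaching 5000 entries on the varchar(32) join columns would be consistent with the slow planning seen here.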
{
"msg_contents": "Le 05/04/2013 16:17, Tom Lane a écrit :\n> What have you got the statistics targets set to in this database?\n\nOk, the problem definitely comes from the default_statistics_target \nwhich is obviously too high on the database.\n\nI have experimented with explain on queries with another set of 4 joined \ntables.\nIn my first attempt, explain took more than 4 seconds (!)\nThen I have set default_statistics_target to 100 and analyzed the 4 \ntables. Explain took a few ms.\nRestored default_statistics_target to 5000, analyzed again. Explain took \n1.8 seconds.\n\nSo, I seem to have two related problems: statistics are somewhat bloated \n(as re-analyzing with same target takes the explain time from 4 sec down \nto 1.8 sec).\nAnd the target is far too high (as default target value take analyse \ndown to a few ms).\n\nNow... can someone help me understand what happens ? Where can I look \n(in pg_stats ?) to see the source of the problem ? maybe a column with a \nhuge list of different values the palnner has to parse ?\n\n\nRegards,\nFranck",
"msg_date": "Sat, 06 Apr 2013 11:05:11 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What happens between end of explain analyze and end\n\tof query execution ?"
},
{
"msg_contents": "Franck Routier <[email protected]> wrote:\n\n> Ok, the problem definitely comes from the\n> default_statistics_target which is obviously too high on the\n> database.\n\n> Now... can someone help me understand what happens ? Where can I\n> look (in pg_stats ?) to see the source of the problem ? maybe a\n> column with a huge list of different values the palnner has to\n> parse ?\n\nThis is a fundamental issue in query planning -- how much work do\nyou want to do to try to come up with the best plan? Too little,\nand the plan can be unacceptably slow; too much and you spend more\nextra time on planning than the improvement in the plan (if any)\nsaves you. Reading and processing statistics gets more expensive\nas you boost the volume.\n\nWhat I would suggest is taking the default_statistics_target for\nthe cluster back down to the default, and selectviely boosting the\nstatistics target for individual columns as you find plans which\nbenefit. Don't set it right at the edge of the tipping point, but\ndon't automatically jump to 5000 every time either.\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 6 Apr 2013 08:26:31 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What happens between end of explain analyze and end of query\n\texecution ?"
},
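
Kevin's suggestion translates roughly into statements like the following; the target value and the columns picked (two of the join columns from the query) are only examples, and the right ones have to come from testing:

    -- in postgresql.conf (or ALTER DATABASE ... SET): default_statistics_target = 100
    ALTER TABLE sanrum ALTER COLUMN sanrum___sanrsside SET STATISTICS 1000;
    ALTER TABLE saneds ALTER COLUMN sanedside SET STATISTICS 1000;
    ANALYZE sanrum;
    ANALYZE saneds;

Per-column settings are dumped along with the table definition, so they only need to be found once.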
{
"msg_contents": "Franck Routier <[email protected]> writes:\n> Le 05/04/2013 16:17, Tom Lane a �crit :\n>> What have you got the statistics targets set to in this database?\n\n> Ok, the problem definitely comes from the default_statistics_target \n> which is obviously too high on the database.\n\nYeah, eqjoinsel() is O(N^2) in the lengths of the MCV lists, in the\nworst case where there's little overlap in the list memberships.\nThe actual cost would depend a lot on the specific column datatypes.\n\nNot sure about your report that re-analyzing with the same stats target\nmade a significant difference. It might have been a matter of chance\nvariation in the sampled MCV list leading to more or fewer matches.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 06 Apr 2013 12:27:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What happens between end of explain analyze and end of query\n\texecution ?"
},
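
To put rough numbers on Tom's point: with default_statistics_target = 5000, each MCV list can hold up to 5,000 entries, so in the worst case a single equality join clause can cost on the order of 5,000 x 5,000 = 25,000,000 comparisons, versus 100 x 100 = 10,000 at the default target, a factor of 2,500, and the query here carries three or four such clauses per joined table. This is back-of-the-envelope only; as Tom says, the real cost also depends on the column datatypes (varchar comparisons are not free).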
{
"msg_contents": "Le 06/04/2013 18:27, Tom Lane a écrit :\n>> Ok, the problem definitely comes from the default_statistics_target\n>> which is obviously too high on the database.\n> Yeah, eqjoinsel() is O(N^2) in the lengths of the MCV lists, in the\n> worst case where there's little overlap in the list memberships.\n> The actual cost would depend a lot on the specific column datatypes.\n>\n> Not sure about your report that re-analyzing with the same stats target\n> made a significant difference. It might have been a matter of chance\n> variation in the sampled MCV list leading to more or fewer matches.\n>\n> \t\t\tregards, tom lane\n>\n>\nThank you all for your help, I appreciate it really much.\n\nJust a last note: maybe the documentation could draw the attention on \nthis side effect of high statistics target.\nRight now it says:\n \"Larger values increase the time needed to do ANALYZE, but might \nimprove the quality of the planner's estimates\" \n(http://www.postgresql.org/docs/9.2/static/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET)\nand: \"Raising the limit might allow more accurate planner estimates to \nbe made, particularly for columns with irregular data distributions, at \nthe price of consuming more space in pg_statistic and slightly more time \nto compute the estimates.\" \n(http://www.postgresql.org/docs/9.2/static/planner-stats.html).\n\nIt could be noted that a too high target can also have a noticeable cost \non query planning.\n\nBest regards,\nFranck",
"msg_date": "Sun, 07 Apr 2013 14:12:33 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What happens between end of explain analyze and end\n\tof query execution ?"
},
{
"msg_contents": "On Sat, Apr 6, 2013 at 9:27 AM, Tom Lane <[email protected]> wrote:\n\n> Franck Routier <[email protected]> writes:\n> > Le 05/04/2013 16:17, Tom Lane a écrit :\n> >> What have you got the statistics targets set to in this database?\n>\n> > Ok, the problem definitely comes from the default_statistics_target\n> > which is obviously too high on the database.\n>\n> Yeah, eqjoinsel() is O(N^2) in the lengths of the MCV lists, in the\n> worst case where there's little overlap in the list memberships.\n> The actual cost would depend a lot on the specific column datatypes.\n>\n\n\nI guess this pre-emptively answers a question I was intending to ask on\nperformance: Whether anyone increased default_statistics_target and came\nto regret it. I had seen several problems fixed by increasing\ndefault_statistics_target, but this is the first one I recall caused by\nincreasing it.\n\nDo you think fixing the O(N^2) behavior would be a good to-do item?\n\nCheers,\n\nJeff\n\nOn Sat, Apr 6, 2013 at 9:27 AM, Tom Lane <[email protected]> wrote:\nFranck Routier <[email protected]> writes:\n> Le 05/04/2013 16:17, Tom Lane a écrit :\n>> What have you got the statistics targets set to in this database?\n\n> Ok, the problem definitely comes from the default_statistics_target\n> which is obviously too high on the database.\n\nYeah, eqjoinsel() is O(N^2) in the lengths of the MCV lists, in the\nworst case where there's little overlap in the list memberships.\nThe actual cost would depend a lot on the specific column datatypes.I guess this pre-emptively answers a question I was intending to ask on performance: Whether anyone increased default_statistics_target and came to regret it. I had seen several problems fixed by increasing default_statistics_target, but this is the first one I recall caused by increasing it.\nDo you think fixing the O(N^2) behavior would be a good to-do item?\nCheers,Jeff",
"msg_date": "Mon, 8 Apr 2013 09:39:01 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What happens between end of explain analyze and end of\n\tquery execution ?"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> On Sat, Apr 6, 2013 at 9:27 AM, Tom Lane <[email protected]> wrote:\n>> Yeah, eqjoinsel() is O(N^2) in the lengths of the MCV lists, in the\n>> worst case where there's little overlap in the list memberships.\n\n> I guess this pre-emptively answers a question I was intending to ask on\n> performance: Whether anyone increased default_statistics_target and came\n> to regret it. I had seen several problems fixed by increasing\n> default_statistics_target, but this is the first one I recall caused by\n> increasing it.\n\nI recall having heard some similar complaints before, but not often.\n\n> Do you think fixing the O(N^2) behavior would be a good to-do item?\n\nIf you can think of a way to do it that doesn't create new assumptions\nthat eqjoinsel ought not make (say, that the datatype is sortable).\n\nI guess one possibility is to have a different join selectivity function\nfor those types that *are* sortable, which would fix the issue for most\ncommonly used types.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 08 Apr 2013 12:46:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What happens between end of explain analyze and end of query\n\texecution ?"
}
] |
[
{
"msg_contents": "Hi All,\n\nHoping someone can help me out with some performance issues I'm having with\nthe INDEX on my database. I've got a database that has a data table\ncontaining ~55,000,000 rows which have point data and an area table\ncontaining ~3,500 rows which have polygon data. A user queries the data by\nselecting what areas they want to view and using some other filters such as\ndatatime and what datasets they want to query. This all works fine and\npreviously the intersect of the data rows to the areas was being done on\nthe fly with PostGIS ST_Intersects. However as the data table grow we\ndecided it would make sense to offload the data processing and not\ncalculate the intersect for a row on the fly each time, but to\npre-calculate it and store the result in the join table. Resultantly this\nproduce a table data_area which contains ~250,000,000 rows. This simply has\ntwo columns which show the intersect between data and area. We where\nexpecting that this would give a significant performance improvement to\nquery time, but the query seems to take a very long time to analyse the\nINDEX as part of the query. I'm thinking there must be something wrong with\nmy setup or the query its self as I'm sure postgres will perform better.\nI've tried restructuring the query, changing config settings and doing\nmaintenance like VACUUM but nothing has helped.\n\nHope that introduction is clear enough and makes sense if anything is\nunclear please let me know.\n\nI'm using PostgreSQL 9.1.4 on x86_64-unknown-linux-gnu, compiled by\ngcc-4.4.real (Ubuntu 4.4.3-4ubuntu5.1) 4.4.3, 64-bit on Ubuntu 12.04 which\nwas installed using apt.\n\nHere is the structure of my database tables\n\nCREATE TABLE data\n(\n id bigserial NOT NULL,\n datasetid integer NOT NULL,\n readingdatetime timestamp without time zone NOT NULL,\n depth double precision NOT NULL,\n readingdatetime2 timestamp without time zone,\n depth2 double precision,\n value double precision NOT NULL,\n uploaddatetime timestamp without time zone,\n description character varying(255),\n point geometry,\n point2 geometry,\n CONSTRAINT \"DATAPRIMARYKEY\" PRIMARY KEY (id ),\n CONSTRAINT enforce_dims_point CHECK (st_ndims(point) = 2),\n CONSTRAINT enforce_dims_point2 CHECK (st_ndims(point2) = 2),\n CONSTRAINT enforce_geotype_point CHECK (geometrytype(point) =\n'POINT'::text OR point IS NULL),\n CONSTRAINT enforce_geotype_point2 CHECK (geometrytype(point2) =\n'POINT'::text OR point2 IS NULL),\n CONSTRAINT enforce_srid_point CHECK (st_srid(point) = 4326),\n CONSTRAINT enforce_srid_point2 CHECK (st_srid(point2) = 4326)\n);\n\nCREATE INDEX data_datasetid_index ON data USING btree (datasetid );\nCREATE INDEX data_point_index ON data USING gist (point );\nCREATE INDEX \"data_readingDatetime_index\" ON data USING btree\n(readingdatetime );\nALTER TABLE data CLUSTER ON \"data_readingDatetime_index\";\n\nCREATE TABLE area\n(\n id serial NOT NULL,\n \"areaCode\" character varying(10) NOT NULL,\n country character varying(250) NOT NULL,\n \"polysetID\" integer NOT NULL,\n polygon geometry,\n CONSTRAINT area_primary_key PRIMARY KEY (id ),\n CONSTRAINT polyset_foreign_key FOREIGN KEY (\"polysetID\")\n REFERENCES polyset (id) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE CASCADE,\n CONSTRAINT enforce_dims_area CHECK (st_ndims(polygon) = 2),\n CONSTRAINT enforce_geotype_area CHECK (geometrytype(polygon) =\n'POLYGON'::text OR polygon IS NULL),\n CONSTRAINT enforce_srid_area CHECK (st_srid(polygon) = 4326)\n);\n\nCREATE INDEX area_polygon_index ON area USING gist 
(polygon );\nCREATE INDEX \"area_polysetID_index\" ON area USING btree (\"polysetID\" );\nALTER TABLE area CLUSTER ON \"area_polysetID_index\";\n\nCREATE TABLE data_area\n(\n data_id integer NOT NULL,\n area_id integer NOT NULL,\n CONSTRAINT data_area_pkey PRIMARY KEY (data_id , area_id ),\n CONSTRAINT data_area_area_id_fk FOREIGN KEY (area_id)\n REFERENCES area (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT data_area_data_id_fk FOREIGN KEY (data_id)\n REFERENCES data (id) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE CASCADE\n);\n\nHere is the query I'm running and the result of its explain can be found\nhere http://explain.depesz.com/s/1yu\n\nSELECT * FROM data d JOIN data_area da ON da.data_id = d.id LEFT JOIN area\na ON da.area_id = a.id WHERE d.datasetid IN\n(5634,5635,5636,5637,5638,5639,5640,5641,5642) AND da.area_id IN\n(1, 2, 3 .... 9999) AND (readingdatetime BETWEEN '1990-01-01' AND\n'2013-01-01') AND depth BETWEEN 0 AND 99999;\n\nIf you look at the explain the index scan is taking 97% of the time is\nspent on the index scan for the JOIN of data_area.\n\nHardware\n\n - CPU: Intel(R) Xeon(R) CPU E5420 ( 8 Cores )\n - RAM: 16GB\n\nConfig Changes\n\nI'm using the base Ubuntu config apart from the following changes\n\n - shared_buffers set to 2GB\n - work_mem set to 1GB\n - maintenance_work_men set to 512MB\n - effective_cache_size set to 8GB\n\nThink that covers everything hope this has enough detail for someone to be\nable to help if there is anything I've missed please let me know and I'll\nadd any more info needed. Any input on the optimisation of the table\nstructure, the query, or anything else I can do to sort this issue would be\nmost appreciated.\n\nThanks in advance,\n\nMark Davidson\n\nHi All, Hoping someone can help me out with some performance issues I'm having with the INDEX on my database. I've got a database that has a data table containing ~55,000,000 rows which have point data and an area table containing ~3,500 rows which have polygon data. A user queries the data by selecting what areas they want to view and using some other filters such as datatime and what datasets they want to query. This all works fine and previously the intersect of the data rows to the areas was being done on the fly with PostGIS ST_Intersects. However as the data table grow we decided it would make sense to offload the data processing and not calculate the intersect for a row on the fly each time, but to pre-calculate it and store the result in the join table. Resultantly this produce a table data_area which contains ~250,000,000 rows. This simply has two columns which show the intersect between data and area. We where expecting that this would give a significant performance improvement to query time, but the query seems to take a very long time to analyse the INDEX as part of the query. I'm thinking there must be something wrong with my setup or the query its self as I'm sure postgres will perform better. \nI've tried restructuring the query, changing config settings and doing maintenance like VACUUM but nothing has helped. Hope that introduction is clear enough and makes sense if anything is unclear please let me know.\nI'm using PostgreSQL 9.1.4 on x86_64-unknown-linux-gnu, compiled by gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5.1) 4.4.3, 64-bit on Ubuntu 12.04 which was installed using apt. 
Here is the structure of my database tables \nCREATE TABLE data( id bigserial NOT NULL, datasetid integer NOT NULL, readingdatetime timestamp without time zone NOT NULL, depth double precision NOT NULL, readingdatetime2 timestamp without time zone,\n depth2 double precision, value double precision NOT NULL, uploaddatetime timestamp without time zone, description character varying(255), point geometry, point2 geometry, CONSTRAINT \"DATAPRIMARYKEY\" PRIMARY KEY (id ),\n CONSTRAINT enforce_dims_point CHECK (st_ndims(point) = 2), CONSTRAINT enforce_dims_point2 CHECK (st_ndims(point2) = 2), CONSTRAINT enforce_geotype_point CHECK (geometrytype(point) = 'POINT'::text OR point IS NULL),\n CONSTRAINT enforce_geotype_point2 CHECK (geometrytype(point2) = 'POINT'::text OR point2 IS NULL), CONSTRAINT enforce_srid_point CHECK (st_srid(point) = 4326), CONSTRAINT enforce_srid_point2 CHECK (st_srid(point2) = 4326)\n);CREATE INDEX data_datasetid_index ON data USING btree (datasetid );CREATE INDEX data_point_index ON data USING gist (point );CREATE INDEX \"data_readingDatetime_index\" ON data USING btree (readingdatetime );\nALTER TABLE data CLUSTER ON \"data_readingDatetime_index\";CREATE TABLE area( id serial NOT NULL, \"areaCode\" character varying(10) NOT NULL, country character varying(250) NOT NULL,\n \"polysetID\" integer NOT NULL, polygon geometry, CONSTRAINT area_primary_key PRIMARY KEY (id ), CONSTRAINT polyset_foreign_key FOREIGN KEY (\"polysetID\") REFERENCES polyset (id) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE CASCADE, CONSTRAINT enforce_dims_area CHECK (st_ndims(polygon) = 2), CONSTRAINT enforce_geotype_area CHECK (geometrytype(polygon) = 'POLYGON'::text OR polygon IS NULL),\n CONSTRAINT enforce_srid_area CHECK (st_srid(polygon) = 4326));CREATE INDEX area_polygon_index ON area USING gist (polygon );CREATE INDEX \"area_polysetID_index\" ON area USING btree (\"polysetID\" );\nALTER TABLE area CLUSTER ON \"area_polysetID_index\";CREATE TABLE data_area( data_id integer NOT NULL, area_id integer NOT NULL, CONSTRAINT data_area_pkey PRIMARY KEY (data_id , area_id ),\n CONSTRAINT data_area_area_id_fk FOREIGN KEY (area_id) REFERENCES area (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT data_area_data_id_fk FOREIGN KEY (data_id) REFERENCES data (id) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE CASCADE);Here is the query I'm running and the result of its explain can be found here http://explain.depesz.com/s/1yu\nSELECT * FROM data d JOIN data_area da ON da.data_id = d.id LEFT JOIN area a ON da.area_id = a.id WHERE d.datasetid IN (5634,5635,5636,5637,5638,5639,5640,5641,5642) AND da.area_id IN\n(1, 2, 3 .... 9999) AND (readingdatetime BETWEEN '1990-01-01' AND '2013-01-01') AND depth BETWEEN 0 AND 99999;If you look at the explain the index scan is taking 97% of the time is spent on the index scan for the JOIN of data_area. \nHardware - CPU: Intel(R) Xeon(R) CPU E5420 ( 8 Cores ) - RAM: 16GBConfig ChangesI'm using the base Ubuntu config apart from the following changes\n - shared_buffers set to 2GB - work_mem set to 1GB - maintenance_work_men set to 512MB - effective_cache_size set to 8GBThink that covers everything hope this has enough detail for someone to be able to help if there is anything I've missed please let me know and I'll add any more info needed. Any input on the optimisation of the table structure, the query, or anything else I can do to sort this issue would be most appreciated. \nThanks in advance, Mark Davidson",
"msg_date": "Fri, 5 Apr 2013 16:51:37 +0100",
"msg_from": "Mark Davidson <[email protected]>",
"msg_from_op": true,
"msg_subject": "INDEX Performance Issue"
},
{
"msg_contents": "Mark Davidson <[email protected]> wrote:\n\n> CONSTRAINT data_area_pkey PRIMARY KEY (data_id , area_id ),\n\nSo the only index on this 250 million row table starts with the ID\nof the point, but you are joining to it by the ID of the area.\nThat's requires a sequential scan of all 250 million rows. Switch\nthe order of the columns in the primary key, add a unique index\nwith the columns switched, or add an index on just the area ID.\n\nPerhaps you thought that the foreign key constraints would create\nindexes? (They don't.)\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 5 Apr 2013 09:37:26 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INDEX Performance Issue"
},
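For readers following the thread, a minimal sketch of the three alternatives Kevin describes, written against the table definitions given earlier. The index names data_area_area_data_uidx and data_area_area_id_idx are invented here for illustration and do not appear anywhere in the thread; treat the statements as a sketch, not as what was actually run.

-- Option 1: rebuild the primary key with area_id leading, so a join that
-- filters on area_id can use it (this rebuilds the underlying index).
ALTER TABLE data_area DROP CONSTRAINT data_area_pkey;
ALTER TABLE data_area ADD CONSTRAINT data_area_pkey PRIMARY KEY (area_id, data_id);

-- Option 2: keep the existing primary key and add a unique index with the
-- column order switched (hypothetical index name).
CREATE UNIQUE INDEX data_area_area_data_uidx ON data_area (area_id, data_id);

-- Option 3: a plain btree index on just the area ID (hypothetical index name).
CREATE INDEX data_area_area_id_idx ON data_area (area_id);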
{
"msg_contents": "Hi Kevin\n\nThanks for your response. I tried doing what you suggested so that table\nnow has a primary key of ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id ,\ndata_id ); ' and I've added the INDEX of 'CREATE INDEX\ndata_area_data_id_index ON data_area USING btree (data_id );' unfortunately\nit hasn't resulted in an improvement of the query performance. Here is the\nexplain http://explain.depesz.com/s/tDL I think there is no performance\nincrease because its now not using primary key and just using the index on\nthe data_id. Have I done what you suggested correctly? Any other\nsuggestions?\n\nThanks very much for your help,\n\nMark\n\n\n\nOn 5 April 2013 17:37, Kevin Grittner <[email protected]> wrote:\n\n> Mark Davidson <[email protected]> wrote:\n>\n> > CONSTRAINT data_area_pkey PRIMARY KEY (data_id , area_id ),\n>\n> So the only index on this 250 million row table starts with the ID\n> of the point, but you are joining to it by the ID of the area.\n> That's requires a sequential scan of all 250 million rows. Switch\n> the order of the columns in the primary key, add a unique index\n> with the columns switched, or add an index on just the area ID.\n>\n> Perhaps you thought that the foreign key constraints would create\n> indexes? (They don't.)\n>\n> --\n> Kevin Grittner\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nHi KevinThanks for your response. I tried doing what you suggested so that table now has a primary key of ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); ' and I've added the INDEX of 'CREATE INDEX data_area_data_id_index ON data_area USING btree (data_id );' unfortunately it hasn't resulted in an improvement of the query performance. Here is the explain http://explain.depesz.com/s/tDL I think there is no performance increase because its now not using primary key and just using the index on the data_id. Have I done what you suggested correctly? Any other suggestions?\nThanks very much for your help, MarkOn 5 April 2013 17:37, Kevin Grittner <[email protected]> wrote:\nMark Davidson <[email protected]> wrote:\n\n> CONSTRAINT data_area_pkey PRIMARY KEY (data_id , area_id ),\n\nSo the only index on this 250 million row table starts with the ID\nof the point, but you are joining to it by the ID of the area.\nThat's requires a sequential scan of all 250 million rows. Switch\nthe order of the columns in the primary key, add a unique index\nwith the columns switched, or add an index on just the area ID.\n\nPerhaps you thought that the foreign key constraints would create\nindexes? (They don't.)\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 7 Apr 2013 11:03:28 +0100",
"msg_from": "Mark Davidson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: INDEX Performance Issue"
},
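One quick way to confirm exactly which indexes the 250-million-row table ended up with after these changes is to ask the catalog directly. A sketch using the standard pg_indexes view, with the table name taken from the thread:

SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'data_area';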
{
"msg_contents": "> Thanks for your response. I tried doing what you suggested so that table now has a primary key of ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); ' and I've added the INDEX > of 'CREATE INDEX data_area_data_id_index ON data_area USING btree (data_id );' unfortunately it hasn't resulted in an improvement of the query performance. Here is the explain \n\n> ...\n\nDid you run analyze on the table after creating the index ?\n\nGW\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 7 Apr 2013 03:21:20 -0700 (PDT)",
"msg_from": "Greg Williamson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INDEX Performance Issue"
},
{
"msg_contents": "Greg Williamson <[email protected]> wrote:\n\n>> Thanks for your response. I tried doing what you suggested so\n>> that table now has a primary key of\n>> ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); '\n>> and I've added the INDEX of\n>> 'CREATE INDEX data_area_data_id_index ON data_area USING btree (data_id );'\n\nYeah, that is what I was suggesting.\n\n>> unfortunately it hasn't resulted in an improvement of the query\n>> performance.\n\n> Did you run analyze on the table after creating the index ?\n\nThat probably isn't necessary. Statistics are normally on relations\nand columns; there are only certain special cases where an ANALYZE\nis needed after an index build, like if the index is on an\nexpression rather than a list of columns.\n\nMark, what happens if you change that left join to a normal (inner)\njoin? Since you're doing an inner join to data_area and that has a\nforeign key to area, there should always be a match anyway, right?\nThe optimizer doesn't recognize that, so it can't start from the\narea and just match to the appropriate points.\n\n-- \nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 7 Apr 2013 08:15:42 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INDEX Performance Issue"
},
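For reference, the rewrite Kevin is asking about is simply the original query with the LEFT JOIN to area turned into a plain JOIN. A sketch follows; the list of area IDs was elided in the original post and is kept elided here.

SELECT *
FROM data d
JOIN data_area da ON da.data_id = d.id
JOIN area a       ON da.area_id = a.id   -- was: LEFT JOIN area a
WHERE d.datasetid IN (5634,5635,5636,5637,5638,5639,5640,5641,5642)
  AND da.area_id IN (1, 2, 3 /* .... */, 9999)
  AND d.readingdatetime BETWEEN '1990-01-01' AND '2013-01-01'
  AND d.depth BETWEEN 0 AND 99999;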
{
"msg_contents": "Takes a little longer with the INNER join unfortunately. Takes about ~3.5\nminutes, here is the query plan http://explain.depesz.com/s/EgBl.\n\nWith the JOIN there might not be a match if the data does not fall within\none of the areas that is selected in the IN query.\n\nSo if we have data id (10) that might fall in areas ( 1, 5, 8, 167 ) but\nthe user might be querying areas ( 200 ... 500 ) so no match in area would\nbe found just to be absolutely clear.\n\nIs it worth considering adding additional statistics on any of the columns?\nAnd / Or additional INDEXES or different types INDEX? Would it be worth\nrestructuring the query starting with areas and working to join data to\nthat?\n\n\nOn 7 April 2013 16:15, Kevin Grittner <[email protected]> wrote:\n\n> Greg Williamson <[email protected]> wrote:\n>\n> >> Thanks for your response. I tried doing what you suggested so\n> >> that table now has a primary key of\n> >> ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); '\n> >> and I've added the INDEX of\n> >> 'CREATE INDEX data_area_data_id_index ON data_area USING btree (data_id\n> );'\n>\n> Yeah, that is what I was suggesting.\n>\n> >> unfortunately it hasn't resulted in an improvement of the query\n> >> performance.\n>\n> > Did you run analyze on the table after creating the index ?\n>\n> That probably isn't necessary. Statistics are normally on relations\n> and columns; there are only certain special cases where an ANALYZE\n> is needed after an index build, like if the index is on an\n> expression rather than a list of columns.\n>\n> Mark, what happens if you change that left join to a normal (inner)\n> join? Since you're doing an inner join to data_area and that has a\n> foreign key to area, there should always be a match anyway, right?\n> The optimizer doesn't recognize that, so it can't start from the\n> area and just match to the appropriate points.\n>\n> --\n> Kevin Grittner\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nTakes a little longer with the INNER join unfortunately. Takes about ~3.5 minutes, here is the query plan http://explain.depesz.com/s/EgBl. With the JOIN there might not be a match if the data does not fall within one of the areas that is selected in the IN query. \nSo if we have data id (10) that might fall in areas ( 1, 5, 8, 167 ) but the user might be querying areas ( 200 ... 500 ) so no match in area would be found just to be absolutely clear. \nIs it worth considering adding additional statistics on any of the columns? And / Or additional INDEXES or different types INDEX? Would it be worth restructuring the query starting with areas and working to join data to that? \nOn 7 April 2013 16:15, Kevin Grittner <[email protected]> wrote:\nGreg Williamson <[email protected]> wrote:\n\n>> Thanks for your response. I tried doing what you suggested so\n>> that table now has a primary key of\n>> ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); '\n>> and I've added the INDEX of\n>> 'CREATE INDEX data_area_data_id_index ON data_area USING btree (data_id );'\n\nYeah, that is what I was suggesting.\n\n>> unfortunately it hasn't resulted in an improvement of the query\n>> performance.\n\n> Did you run analyze on the table after creating the index ?\n\nThat probably isn't necessary. 
Statistics are normally on relations\nand columns; there are only certain special cases where an ANALYZE\nis needed after an index build, like if the index is on an\nexpression rather than a list of columns.\n\nMark, what happens if you change that left join to a normal (inner)\njoin? Since you're doing an inner join to data_area and that has a\nforeign key to area, there should always be a match anyway, right?\nThe optimizer doesn't recognize that, so it can't start from the\narea and just match to the appropriate points.\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 7 Apr 2013 23:22:32 +0100",
"msg_from": "Mark Davidson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: INDEX Performance Issue"
},
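On the last question above, one possible shape for the "start from areas" rewrite is sketched below. Note that for a three-table join PostgreSQL normally considers alternative join orders itself (up to join_collapse_limit), so spelling the query this way is not guaranteed to produce a different plan; this is only an illustration of what such a restructuring might look like.

SELECT d.*, a.*
FROM area a
JOIN data_area da ON da.area_id = a.id
JOIN data d       ON d.id = da.data_id
WHERE a.id IN (1, 2, 3 /* .... */, 9999)
  AND d.datasetid IN (5634,5635,5636,5637,5638,5639,5640,5641,5642)
  AND d.readingdatetime BETWEEN '1990-01-01' AND '2013-01-01'
  AND d.depth BETWEEN 0 AND 99999;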
{
"msg_contents": "Been trying to progress with this today. Decided to setup the database on\nmy local machine to try a few things and I'm getting much more sensible\nresults and a totally different query plan\nhttp://explain.depesz.com/s/KGdin this case the query took about a\nminute but does sometimes take around\n80 seconds.\n\nThe config is exactly the same between the two database. The databases them\nselves are identical with all indexes the same on the tables.\n\nThe server has an 2 x Intel Xeon E5420 running at 2.5Ghz each, 16GB RAM and\nthe database is just on a SATA HDD which is a Western Digital WD5000AAKS.\nMy desktop has a single i5-3570K running at 3.4Ghz, 16GB RAM and the\ndatabase is running on a SATA HDD which is a Western Digital WD1002FAEX-0\n\nCould anyone offer any reasoning as to why the plan would be so different\nacross the two machines? I would have thought that the server would perform\na lot better since it has more cores or is postgres more affected by the\nCPU speed? Could anyone suggest a way to bench mark the machines for their\npostgres performance?\n\nThanks again for everyones input,\n\nMark\n\n\nOn 7 April 2013 23:22, Mark Davidson <[email protected]> wrote:\n\n> Takes a little longer with the INNER join unfortunately. Takes about ~3.5\n> minutes, here is the query plan http://explain.depesz.com/s/EgBl.\n>\n> With the JOIN there might not be a match if the data does not fall within\n> one of the areas that is selected in the IN query.\n>\n> So if we have data id (10) that might fall in areas ( 1, 5, 8, 167 ) but\n> the user might be querying areas ( 200 ... 500 ) so no match in area would\n> be found just to be absolutely clear.\n>\n> Is it worth considering adding additional statistics on any of the\n> columns? And / Or additional INDEXES or different types INDEX? Would it be\n> worth restructuring the query starting with areas and working to join data\n> to that?\n>\n>\n> On 7 April 2013 16:15, Kevin Grittner <[email protected]> wrote:\n>\n>> Greg Williamson <[email protected]> wrote:\n>>\n>> >> Thanks for your response. I tried doing what you suggested so\n>> >> that table now has a primary key of\n>> >> ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); '\n>> >> and I've added the INDEX of\n>> >> 'CREATE INDEX data_area_data_id_index ON data_area USING btree\n>> (data_id );'\n>>\n>> Yeah, that is what I was suggesting.\n>>\n>> >> unfortunately it hasn't resulted in an improvement of the query\n>> >> performance.\n>>\n>> > Did you run analyze on the table after creating the index ?\n>>\n>> That probably isn't necessary. Statistics are normally on relations\n>> and columns; there are only certain special cases where an ANALYZE\n>> is needed after an index build, like if the index is on an\n>> expression rather than a list of columns.\n>>\n>> Mark, what happens if you change that left join to a normal (inner)\n>> join? Since you're doing an inner join to data_area and that has a\n>> foreign key to area, there should always be a match anyway, right?\n>> The optimizer doesn't recognize that, so it can't start from the\n>> area and just match to the appropriate points.\n>>\n>> --\n>> Kevin Grittner\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>\n>\n\nBeen trying to progress with this today. 
",
"msg_date": "Mon, 8 Apr 2013 18:02:59 +0100",
"msg_from": "Mark Davidson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: INDEX Performance Issue"
},
{
"msg_contents": "Hello Mark,\nPostgreSQL currently doesn't support parallel query so a faster cpu even if\nit has less cores would be faster for a single query, about benchmarking\nyou can try pgbench that you will find in the contrib,\nthe execution plan may be different because of different statistics, have\nyou analyzed both databases when you compared the execution plans ?\n\nVasilis Ventirozos\n\n\nBeen trying to progress with this today. Decided to setup the database on\n> my local machine to try a few things and I'm getting much more sensible\n> results and a totally different query plan http://explain.depesz.com/s/KGdin this case the query took about a minute but does sometimes take around\n> 80 seconds.\n>\n> The config is exactly the same between the two database. The databases\n> them selves are identical with all indexes the same on the tables.\n>\n> The server has an 2 x Intel Xeon E5420 running at 2.5Ghz each, 16GB RAM\n> and the database is just on a SATA HDD which is a Western Digital\n> WD5000AAKS.\n> My desktop has a single i5-3570K running at 3.4Ghz, 16GB RAM and the\n> database is running on a SATA HDD which is a Western Digital WD1002FAEX-0\n>\n> Could anyone offer any reasoning as to why the plan would be so different\n> across the two machines? I would have thought that the server would perform\n> a lot better since it has more cores or is postgres more affected by the\n> CPU speed? Could anyone suggest a way to bench mark the machines for their\n> postgres performance?\n>\n> Thanks again for everyones input,\n>\n> Mark\n>\n>\n> On 7 April 2013 23:22, Mark Davidson <[email protected]> wrote:\n>\n>> Takes a little longer with the INNER join unfortunately. Takes about ~3.5\n>> minutes, here is the query plan http://explain.depesz.com/s/EgBl.\n>>\n>> With the JOIN there might not be a match if the data does not fall within\n>> one of the areas that is selected in the IN query.\n>>\n>> So if we have data id (10) that might fall in areas ( 1, 5, 8, 167 ) but\n>> the user might be querying areas ( 200 ... 500 ) so no match in area would\n>> be found just to be absolutely clear.\n>>\n>> Is it worth considering adding additional statistics on any of the\n>> columns? And / Or additional INDEXES or different types INDEX? Would it be\n>> worth restructuring the query starting with areas and working to join data\n>> to that?\n>>\n>>\n>> On 7 April 2013 16:15, Kevin Grittner <[email protected]> wrote:\n>>\n>>> Greg Williamson <[email protected]> wrote:\n>>>\n>>> >> Thanks for your response. I tried doing what you suggested so\n>>> >> that table now has a primary key of\n>>> >> ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); '\n>>> >> and I've added the INDEX of\n>>> >> 'CREATE INDEX data_area_data_id_index ON data_area USING btree\n>>> (data_id );'\n>>>\n>>> Yeah, that is what I was suggesting.\n>>>\n>>> >> unfortunately it hasn't resulted in an improvement of the query\n>>> >> performance.\n>>>\n>>> > Did you run analyze on the table after creating the index ?\n>>>\n>>> That probably isn't necessary. Statistics are normally on relations\n>>> and columns; there are only certain special cases where an ANALYZE\n>>> is needed after an index build, like if the index is on an\n>>> expression rather than a list of columns.\n>>>\n>>> Mark, what happens if you change that left join to a normal (inner)\n>>> join? 
Since you're doing an inner join to data_area and that has a\n>>> foreign key to area, there should always be a match anyway, right?\n>>> The optimizer doesn't recognize that, so it can't start from the\n>>> area and just match to the appropriate points.\n>>>\n>>> --\n>>> Kevin Grittner\n>>> EnterpriseDB: http://www.enterprisedb.com\n>>> The Enterprise PostgreSQL Company\n>>>\n>>\n>>\n>\n\nHello Mark,PostgreSQL currently doesn't support parallel query so a faster cpu even if it has less cores would be faster for a single query, about benchmarking you can try pgbench that you will find in the contrib,\nthe execution plan may be different because of different statistics, have you analyzed both databases when you compared the execution plans ?\nVasilis Ventirozos\nBeen trying to progress with this today. Decided to setup the database on my local machine to try a few things and I'm getting much more sensible results and a totally different query plan http://explain.depesz.com/s/KGd in this case the query took about a minute but does sometimes take around 80 seconds. \nThe config is exactly the same between the two database. The databases them selves are identical with all indexes the same on the tables. The server has an 2 x Intel Xeon E5420 running at 2.5Ghz each, 16GB RAM and the database is just on a SATA HDD which is a Western Digital WD5000AAKS.\nMy desktop has a single i5-3570K running at 3.4Ghz, 16GB RAM and the database is running on a SATA HDD which is a Western Digital WD1002FAEX-0Could anyone offer any reasoning as to why the plan would be so different across the two machines? I would have thought that the server would perform a lot better since it has more cores or is postgres more affected by the CPU speed? Could anyone suggest a way to bench mark the machines for their postgres performance?\nThanks again for everyones input,Mark\nOn 7 April 2013 23:22, Mark Davidson <[email protected]> wrote:\nTakes a little longer with the INNER join unfortunately. Takes about ~3.5 minutes, here is the query plan http://explain.depesz.com/s/EgBl. \nWith the JOIN there might not be a match if the data does not fall within one of the areas that is selected in the IN query. \nSo if we have data id (10) that might fall in areas ( 1, 5, 8, 167 ) but the user might be querying areas ( 200 ... 500 ) so no match in area would be found just to be absolutely clear. \n\nIs it worth considering adding additional statistics on any of the columns? And / Or additional INDEXES or different types INDEX? Would it be worth restructuring the query starting with areas and working to join data to that? \nOn 7 April 2013 16:15, Kevin Grittner <[email protected]> wrote:\n\nGreg Williamson <[email protected]> wrote:\n\n>> Thanks for your response. I tried doing what you suggested so\n>> that table now has a primary key of\n>> ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); '\n>> and I've added the INDEX of\n>> 'CREATE INDEX data_area_data_id_index ON data_area USING btree (data_id );'\n\nYeah, that is what I was suggesting.\n\n>> unfortunately it hasn't resulted in an improvement of the query\n>> performance.\n\n> Did you run analyze on the table after creating the index ?\n\nThat probably isn't necessary. Statistics are normally on relations\nand columns; there are only certain special cases where an ANALYZE\nis needed after an index build, like if the index is on an\nexpression rather than a list of columns.\n\nMark, what happens if you change that left join to a normal (inner)\njoin? 
Since you're doing an inner join to data_area and that has a\nforeign key to area, there should always be a match anyway, right?\nThe optimizer doesn't recognize that, so it can't start from the\narea and just match to the appropriate points.\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 8 Apr 2013 20:19:21 +0300",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INDEX Performance Issue"
},
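A sketch of what "analyze both databases" amounts to for the tables in this thread, followed by a standard catalog query (pg_stat_user_tables is a stock PostgreSQL view) to check when each table's statistics were last refreshed:

ANALYZE data;
ANALYZE data_area;
ANALYZE area;

SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('data', 'data_area', 'area');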
{
"msg_contents": "Thanks for your response Vasillis. I've run pgbench on both machines\n`./pgbench -c 10 -t 10000 pgbench` getting 99.800650 tps on my local\nmachine and 23.825332 tps on the server so quite a significant difference.\nCould this purely be down to the CPU clock speed or is it likely something\nelse causing the issue?\nI have run ANALYZE on both databases and tried the queries a number of\ntimes on each to make sure the results are consistent, this is the case.\n\n\nOn 8 April 2013 18:19, Vasilis Ventirozos <[email protected]> wrote:\n\n>\n> Hello Mark,\n> PostgreSQL currently doesn't support parallel query so a faster cpu even\n> if it has less cores would be faster for a single query, about benchmarking\n> you can try pgbench that you will find in the contrib,\n> the execution plan may be different because of different statistics, have\n> you analyzed both databases when you compared the execution plans ?\n>\n> Vasilis Ventirozos\n>\n>\n> Been trying to progress with this today. Decided to setup the database on\n>> my local machine to try a few things and I'm getting much more sensible\n>> results and a totally different query plan\n>> http://explain.depesz.com/s/KGd in this case the query took about a\n>> minute but does sometimes take around 80 seconds.\n>>\n>> The config is exactly the same between the two database. The databases\n>> them selves are identical with all indexes the same on the tables.\n>>\n>> The server has an 2 x Intel Xeon E5420 running at 2.5Ghz each, 16GB RAM\n>> and the database is just on a SATA HDD which is a Western Digital\n>> WD5000AAKS.\n>> My desktop has a single i5-3570K running at 3.4Ghz, 16GB RAM and the\n>> database is running on a SATA HDD which is a Western Digital WD1002FAEX-0\n>>\n>> Could anyone offer any reasoning as to why the plan would be so different\n>> across the two machines? I would have thought that the server would perform\n>> a lot better since it has more cores or is postgres more affected by the\n>> CPU speed? Could anyone suggest a way to bench mark the machines for their\n>> postgres performance?\n>>\n>> Thanks again for everyones input,\n>>\n>> Mark\n>>\n>>\n>> On 7 April 2013 23:22, Mark Davidson <[email protected]> wrote:\n>>\n>>> Takes a little longer with the INNER join unfortunately. Takes about\n>>> ~3.5 minutes, here is the query plan http://explain.depesz.com/s/EgBl.\n>>>\n>>> With the JOIN there might not be a match if the data does not fall\n>>> within one of the areas that is selected in the IN query.\n>>>\n>>> So if we have data id (10) that might fall in areas ( 1, 5, 8, 167 ) but\n>>> the user might be querying areas ( 200 ... 500 ) so no match in area would\n>>> be found just to be absolutely clear.\n>>>\n>>> Is it worth considering adding additional statistics on any of the\n>>> columns? And / Or additional INDEXES or different types INDEX? Would it be\n>>> worth restructuring the query starting with areas and working to join data\n>>> to that?\n>>>\n>>>\n>>> On 7 April 2013 16:15, Kevin Grittner <[email protected]> wrote:\n>>>\n>>>> Greg Williamson <[email protected]> wrote:\n>>>>\n>>>> >> Thanks for your response. 
I tried doing what you suggested so\n>>>> >> that table now has a primary key of\n>>>> >> ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); '\n>>>> >> and I've added the INDEX of\n>>>> >> 'CREATE INDEX data_area_data_id_index ON data_area USING btree\n>>>> (data_id );'\n>>>>\n>>>> Yeah, that is what I was suggesting.\n>>>>\n>>>> >> unfortunately it hasn't resulted in an improvement of the query\n>>>> >> performance.\n>>>>\n>>>> > Did you run analyze on the table after creating the index ?\n>>>>\n>>>> That probably isn't necessary. Statistics are normally on relations\n>>>> and columns; there are only certain special cases where an ANALYZE\n>>>> is needed after an index build, like if the index is on an\n>>>> expression rather than a list of columns.\n>>>>\n>>>> Mark, what happens if you change that left join to a normal (inner)\n>>>> join? Since you're doing an inner join to data_area and that has a\n>>>> foreign key to area, there should always be a match anyway, right?\n>>>> The optimizer doesn't recognize that, so it can't start from the\n>>>> area and just match to the appropriate points.\n>>>>\n>>>> --\n>>>> Kevin Grittner\n>>>> EnterpriseDB: http://www.enterprisedb.com\n>>>> The Enterprise PostgreSQL Company\n>>>>\n>>>\n>>>\n>>\n>\n\nThanks for your response Vasillis. I've run pgbench on both machines `./pgbench -c 10 -t 10000 pgbench` getting 99.800650 tps on my local machine and 23.825332 tps on the server so quite a significant difference. \nCould this purely be down to the CPU clock speed or is it likely something else causing the issue?I have run ANALYZE on both databases and tried the queries a number of times on each to make sure the results are consistent, this is the case. \nOn 8 April 2013 18:19, Vasilis Ventirozos <[email protected]> wrote:\nHello Mark,PostgreSQL currently doesn't support parallel query so a faster cpu even if it has less cores would be faster for a single query, about benchmarking you can try pgbench that you will find in the contrib,\nthe execution plan may be different because of different statistics, have you analyzed both databases when you compared the execution plans ?\n\nVasilis Ventirozos\n\nBeen trying to progress with this today. Decided to setup the database on my local machine to try a few things and I'm getting much more sensible results and a totally different query plan http://explain.depesz.com/s/KGd in this case the query took about a minute but does sometimes take around 80 seconds. \nThe config is exactly the same between the two database. The databases them selves are identical with all indexes the same on the tables. The server has an 2 x Intel Xeon E5420 running at 2.5Ghz each, 16GB RAM and the database is just on a SATA HDD which is a Western Digital WD5000AAKS.\nMy desktop has a single i5-3570K running at 3.4Ghz, 16GB RAM and the database is running on a SATA HDD which is a Western Digital WD1002FAEX-0Could anyone offer any reasoning as to why the plan would be so different across the two machines? I would have thought that the server would perform a lot better since it has more cores or is postgres more affected by the CPU speed? Could anyone suggest a way to bench mark the machines for their postgres performance?\nThanks again for everyones input,Mark\nOn 7 April 2013 23:22, Mark Davidson <[email protected]> wrote:\nTakes a little longer with the INNER join unfortunately. Takes about ~3.5 minutes, here is the query plan http://explain.depesz.com/s/EgBl. 
\nWith the JOIN there might not be a match if the data does not fall within one of the areas that is selected in the IN query. \nSo if we have data id (10) that might fall in areas ( 1, 5, 8, 167 ) but the user might be querying areas ( 200 ... 500 ) so no match in area would be found just to be absolutely clear. \n\nIs it worth considering adding additional statistics on any of the columns? And / Or additional INDEXES or different types INDEX? Would it be worth restructuring the query starting with areas and working to join data to that? \nOn 7 April 2013 16:15, Kevin Grittner <[email protected]> wrote:\n\nGreg Williamson <[email protected]> wrote:\n\n>> Thanks for your response. I tried doing what you suggested so\n>> that table now has a primary key of\n>> ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); '\n>> and I've added the INDEX of\n>> 'CREATE INDEX data_area_data_id_index ON data_area USING btree (data_id );'\n\nYeah, that is what I was suggesting.\n\n>> unfortunately it hasn't resulted in an improvement of the query\n>> performance.\n\n> Did you run analyze on the table after creating the index ?\n\nThat probably isn't necessary. Statistics are normally on relations\nand columns; there are only certain special cases where an ANALYZE\nis needed after an index build, like if the index is on an\nexpression rather than a list of columns.\n\nMark, what happens if you change that left join to a normal (inner)\njoin? Since you're doing an inner join to data_area and that has a\nforeign key to area, there should always be a match anyway, right?\nThe optimizer doesn't recognize that, so it can't start from the\narea and just match to the appropriate points.\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 8 Apr 2013 20:31:02 +0100",
"msg_from": "Mark Davidson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: INDEX Performance Issue"
},
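The scaling factor shown in the pgbench output above is fixed when the benchmark database is initialised (pgbench -i -s N), and it can be read back from the generated tables, since pgbench creates one pgbench_branches row per unit of scale. A sketch, assuming the benchmark database was populated by pgbench in the usual way:

-- Run against the pgbench database; the row count equals the -s value used at init time.
SELECT count(*) AS scaling_factor FROM pgbench_branches;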
{
"msg_contents": "-c 10 means 10 clients so that should take advantage of all your cores (see\nbellow)\n\n%Cpu0 : 39.3 us, 21.1 sy, 0.0 ni, 38.7 id, 0.9 wa, 0.0 hi, 0.0 si, 0.0 st\n%Cpu1 : 38.0 us, 25.0 sy, 0.0 ni, 26.0 id, 4.2 wa, 0.0 hi, 6.8 si, 0.0 st\n%Cpu2 : 39.3 us, 20.4 sy, 0.0 ni, 39.0 id, 1.3 wa, 0.0 hi, 0.0 si, 0.0 st\n%Cpu3 : 40.0 us, 18.7 sy, 0.0 ni, 40.0 id, 1.3 wa, 0.0 hi, 0.0 si, 0.0 st\n%Cpu4 : 13.9 us, 7.1 sy, 0.0 ni, 79.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n%Cpu5 : 13.1 us, 8.4 sy, 0.0 ni, 78.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n%Cpu6 : 14.8 us, 6.4 sy, 0.0 ni, 78.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n%Cpu7 : 15.7 us, 6.7 sy, 0.0 ni, 77.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n\n i am pasting you the results of the same test on a i7-2600 16gb with a\nsata3 SSD and the results from a VM with 2 cores and a normal 7200 rpm hdd\n\n-- DESKTOP\nvasilis@Disorder ~ $ pgbench -c 10 -t 10000 bench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 10\nnumber of threads: 1\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 1713.338111 (including connections establishing)\ntps = 1713.948478 (excluding connections establishing)\n\n-- VM\n\npostgres@pglab1:~/postgresql-9.2.4/contrib/pgbench$ ./pgbench -c 10 -t\n10000 bench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 10\nnumber of threads: 1\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 1118.976496 (including connections establishing)\ntps = 1119.180126 (excluding connections establishing)\n\ni am assuming that you didn't populate your pgbench db with the default\nvalues , if you tell me how you did i will be happy to re run the test and\nsee the differences.\n\n\n\nOn Mon, Apr 8, 2013 at 10:31 PM, Mark Davidson <[email protected]> wrote:\n\n> Thanks for your response Vasillis. I've run pgbench on both machines\n> `./pgbench -c 10 -t 10000 pgbench` getting 99.800650 tps on my local\n> machine and 23.825332 tps on the server so quite a significant difference.\n> Could this purely be down to the CPU clock speed or is it likely something\n> else causing the issue?\n> I have run ANALYZE on both databases and tried the queries a number of\n> times on each to make sure the results are consistent, this is the case.\n>\n>\n> On 8 April 2013 18:19, Vasilis Ventirozos <[email protected]> wrote:\n>\n>>\n>> Hello Mark,\n>> PostgreSQL currently doesn't support parallel query so a faster cpu even\n>> if it has less cores would be faster for a single query, about benchmarking\n>> you can try pgbench that you will find in the contrib,\n>> the execution plan may be different because of different statistics, have\n>> you analyzed both databases when you compared the execution plans ?\n>>\n>> Vasilis Ventirozos\n>>\n>>\n>> Been trying to progress with this today. Decided to setup the database\n>>> on my local machine to try a few things and I'm getting much more sensible\n>>> results and a totally different query plan\n>>> http://explain.depesz.com/s/KGd in this case the query took about a\n>>> minute but does sometimes take around 80 seconds.\n>>>\n>>> The config is exactly the same between the two database. 
The databases\n>>> them selves are identical with all indexes the same on the tables.\n>>>\n>>> The server has an 2 x Intel Xeon E5420 running at 2.5Ghz each, 16GB RAM\n>>> and the database is just on a SATA HDD which is a Western Digital\n>>> WD5000AAKS.\n>>> My desktop has a single i5-3570K running at 3.4Ghz, 16GB RAM and the\n>>> database is running on a SATA HDD which is a Western Digital WD1002FAEX-0\n>>>\n>>> Could anyone offer any reasoning as to why the plan would be so\n>>> different across the two machines? I would have thought that the server\n>>> would perform a lot better since it has more cores or is postgres more\n>>> affected by the CPU speed? Could anyone suggest a way to bench mark the\n>>> machines for their postgres performance?\n>>>\n>>> Thanks again for everyones input,\n>>>\n>>> Mark\n>>>\n>>>\n>>> On 7 April 2013 23:22, Mark Davidson <[email protected]> wrote:\n>>>\n>>>> Takes a little longer with the INNER join unfortunately. Takes about\n>>>> ~3.5 minutes, here is the query plan http://explain.depesz.com/s/EgBl.\n>>>>\n>>>> With the JOIN there might not be a match if the data does not fall\n>>>> within one of the areas that is selected in the IN query.\n>>>>\n>>>> So if we have data id (10) that might fall in areas ( 1, 5, 8, 167 )\n>>>> but the user might be querying areas ( 200 ... 500 ) so no match in area\n>>>> would be found just to be absolutely clear.\n>>>>\n>>>> Is it worth considering adding additional statistics on any of the\n>>>> columns? And / Or additional INDEXES or different types INDEX? Would it be\n>>>> worth restructuring the query starting with areas and working to join data\n>>>> to that?\n>>>>\n>>>>\n>>>> On 7 April 2013 16:15, Kevin Grittner <[email protected]> wrote:\n>>>>\n>>>>> Greg Williamson <[email protected]> wrote:\n>>>>>\n>>>>> >> Thanks for your response. I tried doing what you suggested so\n>>>>> >> that table now has a primary key of\n>>>>> >> ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); '\n>>>>> >> and I've added the INDEX of\n>>>>> >> 'CREATE INDEX data_area_data_id_index ON data_area USING btree\n>>>>> (data_id );'\n>>>>>\n>>>>> Yeah, that is what I was suggesting.\n>>>>>\n>>>>> >> unfortunately it hasn't resulted in an improvement of the query\n>>>>> >> performance.\n>>>>>\n>>>>> > Did you run analyze on the table after creating the index ?\n>>>>>\n>>>>> That probably isn't necessary. Statistics are normally on relations\n>>>>> and columns; there are only certain special cases where an ANALYZE\n>>>>> is needed after an index build, like if the index is on an\n>>>>> expression rather than a list of columns.\n>>>>>\n>>>>> Mark, what happens if you change that left join to a normal (inner)\n>>>>> join? 
Since you're doing an inner join to data_area and that has a\n>>>>> foreign key to area, there should always be a match anyway, right?\n>>>>> The optimizer doesn't recognize that, so it can't start from the\n>>>>> area and just match to the appropriate points.\n>>>>>\n>>>>> --\n>>>>> Kevin Grittner\n>>>>> EnterpriseDB: http://www.enterprisedb.com\n>>>>> The Enterprise PostgreSQL Company\n>>>>>\n>>>>\n>>>>\n>>>\n>>\n>\n\n-c 10 means 10 clients so that should take advantage of all your cores (see bellow)%Cpu0 : 39.3 us, 21.1 sy, 0.0 ni, 38.7 id, 0.9 wa, 0.0 hi, 0.0 si, 0.0 st\n%Cpu1 : 38.0 us, 25.0 sy, 0.0 ni, 26.0 id, 4.2 wa, 0.0 hi, 6.8 si, 0.0 st%Cpu2 : 39.3 us, 20.4 sy, 0.0 ni, 39.0 id, 1.3 wa, 0.0 hi, 0.0 si, 0.0 st%Cpu3 : 40.0 us, 18.7 sy, 0.0 ni, 40.0 id, 1.3 wa, 0.0 hi, 0.0 si, 0.0 st\n%Cpu4 : 13.9 us, 7.1 sy, 0.0 ni, 79.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st%Cpu5 : 13.1 us, 8.4 sy, 0.0 ni, 78.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st%Cpu6 : 14.8 us, 6.4 sy, 0.0 ni, 78.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n%Cpu7 : 15.7 us, 6.7 sy, 0.0 ni, 77.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st i am pasting you the results of the same test on a i7-2600 16gb with a sata3 SSD and the results from a VM with 2 cores and a normal 7200 rpm hdd\n-- DESKTOPvasilis@Disorder ~ $ pgbench -c 10 -t 10000 benchstarting vacuum...end.transaction type: TPC-B (sort of)scaling factor: 1query mode: simplenumber of clients: 10\nnumber of threads: 1number of transactions per client: 10000number of transactions actually processed: 100000/100000tps = 1713.338111 (including connections establishing)tps = 1713.948478 (excluding connections establishing)\n-- VMpostgres@pglab1:~/postgresql-9.2.4/contrib/pgbench$ ./pgbench -c 10 -t 10000 benchstarting vacuum...end.transaction type: TPC-B (sort of)scaling factor: 1\nquery mode: simplenumber of clients: 10number of threads: 1number of transactions per client: 10000number of transactions actually processed: 100000/100000tps = 1118.976496 (including connections establishing)\ntps = 1119.180126 (excluding connections establishing)i am assuming that you didn't populate your pgbench db with the default values , if you tell me how you did i will be happy to re run the test and see the differences.\nOn Mon, Apr 8, 2013 at 10:31 PM, Mark Davidson <[email protected]> wrote:\nThanks for your response Vasillis. I've run pgbench on both machines `./pgbench -c 10 -t 10000 pgbench` getting 99.800650 tps on my local machine and 23.825332 tps on the server so quite a significant difference. \nCould this purely be down to the CPU clock speed or is it likely something else causing the issue?I have run ANALYZE on both databases and tried the queries a number of times on each to make sure the results are consistent, this is the case. \nOn 8 April 2013 18:19, Vasilis Ventirozos <[email protected]> wrote:\nHello Mark,PostgreSQL currently doesn't support parallel query so a faster cpu even if it has less cores would be faster for a single query, about benchmarking you can try pgbench that you will find in the contrib,\nthe execution plan may be different because of different statistics, have you analyzed both databases when you compared the execution plans ?\n\nVasilis Ventirozos\n\nBeen trying to progress with this today. Decided to setup the database on my local machine to try a few things and I'm getting much more sensible results and a totally different query plan http://explain.depesz.com/s/KGd in this case the query took about a minute but does sometimes take around 80 seconds. 
\nThe config is exactly the same between the two database. The databases them selves are identical with all indexes the same on the tables. The server has an 2 x Intel Xeon E5420 running at 2.5Ghz each, 16GB RAM and the database is just on a SATA HDD which is a Western Digital WD5000AAKS.\nMy desktop has a single i5-3570K running at 3.4Ghz, 16GB RAM and the database is running on a SATA HDD which is a Western Digital WD1002FAEX-0Could anyone offer any reasoning as to why the plan would be so different across the two machines? I would have thought that the server would perform a lot better since it has more cores or is postgres more affected by the CPU speed? Could anyone suggest a way to bench mark the machines for their postgres performance?\nThanks again for everyones input,Mark\nOn 7 April 2013 23:22, Mark Davidson <[email protected]> wrote:\nTakes a little longer with the INNER join unfortunately. Takes about ~3.5 minutes, here is the query plan http://explain.depesz.com/s/EgBl. \nWith the JOIN there might not be a match if the data does not fall within one of the areas that is selected in the IN query. \nSo if we have data id (10) that might fall in areas ( 1, 5, 8, 167 ) but the user might be querying areas ( 200 ... 500 ) so no match in area would be found just to be absolutely clear. \n\nIs it worth considering adding additional statistics on any of the columns? And / Or additional INDEXES or different types INDEX? Would it be worth restructuring the query starting with areas and working to join data to that? \nOn 7 April 2013 16:15, Kevin Grittner <[email protected]> wrote:\n\nGreg Williamson <[email protected]> wrote:\n\n>> Thanks for your response. I tried doing what you suggested so\n>> that table now has a primary key of\n>> ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); '\n>> and I've added the INDEX of\n>> 'CREATE INDEX data_area_data_id_index ON data_area USING btree (data_id );'\n\nYeah, that is what I was suggesting.\n\n>> unfortunately it hasn't resulted in an improvement of the query\n>> performance.\n\n> Did you run analyze on the table after creating the index ?\n\nThat probably isn't necessary. Statistics are normally on relations\nand columns; there are only certain special cases where an ANALYZE\nis needed after an index build, like if the index is on an\nexpression rather than a list of columns.\n\nMark, what happens if you change that left join to a normal (inner)\njoin? Since you're doing an inner join to data_area and that has a\nforeign key to area, there should always be a match anyway, right?\nThe optimizer doesn't recognize that, so it can't start from the\narea and just match to the appropriate points.\n\n--\nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 8 Apr 2013 23:02:58 +0300",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INDEX Performance Issue"
},
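To compare the two machines' configurations directly, listing every setting that is not at its compiled-in default is usually enough. A sketch using the standard pg_settings view:

SELECT name, setting, source
FROM pg_settings
WHERE source <> 'default'
ORDER BY name;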
{
"msg_contents": "Wow my results are absolutely appalling compared to both of those which is\nreally interesting. Are you running postgres 9.2.4 on both instances? Any\nspecific configuration changes?\nThinking there must be something up with my setup to be getting such a low\ntps compared with you.\n\nOn 8 April 2013 21:02, Vasilis Ventirozos <[email protected]> wrote:\n\n>\n> -c 10 means 10 clients so that should take advantage of all your cores\n> (see bellow)\n>\n> %Cpu0 : 39.3 us, 21.1 sy, 0.0 ni, 38.7 id, 0.9 wa, 0.0 hi, 0.0 si, 0.0 st\n> %Cpu1 : 38.0 us, 25.0 sy, 0.0 ni, 26.0 id, 4.2 wa, 0.0 hi, 6.8 si, 0.0 st\n> %Cpu2 : 39.3 us, 20.4 sy, 0.0 ni, 39.0 id, 1.3 wa, 0.0 hi, 0.0 si, 0.0 st\n> %Cpu3 : 40.0 us, 18.7 sy, 0.0 ni, 40.0 id, 1.3 wa, 0.0 hi, 0.0 si, 0.0 st\n> %Cpu4 : 13.9 us, 7.1 sy, 0.0 ni, 79.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n> %Cpu5 : 13.1 us, 8.4 sy, 0.0 ni, 78.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n> %Cpu6 : 14.8 us, 6.4 sy, 0.0 ni, 78.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n> %Cpu7 : 15.7 us, 6.7 sy, 0.0 ni, 77.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n>\n> i am pasting you the results of the same test on a i7-2600 16gb with a\n> sata3 SSD and the results from a VM with 2 cores and a normal 7200 rpm hdd\n>\n> -- DESKTOP\n> vasilis@Disorder ~ $ pgbench -c 10 -t 10000 bench\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 10\n> number of threads: 1\n> number of transactions per client: 10000\n> number of transactions actually processed: 100000/100000\n> tps = 1713.338111 (including connections establishing)\n> tps = 1713.948478 (excluding connections establishing)\n>\n> -- VM\n>\n> postgres@pglab1:~/postgresql-9.2.4/contrib/pgbench$ ./pgbench -c 10 -t\n> 10000 bench\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 10\n> number of threads: 1\n> number of transactions per client: 10000\n> number of transactions actually processed: 100000/100000\n> tps = 1118.976496 (including connections establishing)\n> tps = 1119.180126 (excluding connections establishing)\n>\n> i am assuming that you didn't populate your pgbench db with the default\n> values , if you tell me how you did i will be happy to re run the test and\n> see the differences.\n>\n>\n>\n> On Mon, Apr 8, 2013 at 10:31 PM, Mark Davidson <[email protected]> wrote:\n>\n>> Thanks for your response Vasillis. I've run pgbench on both machines\n>> `./pgbench -c 10 -t 10000 pgbench` getting 99.800650 tps on my local\n>> machine and 23.825332 tps on the server so quite a significant difference.\n>> Could this purely be down to the CPU clock speed or is it likely\n>> something else causing the issue?\n>> I have run ANALYZE on both databases and tried the queries a number of\n>> times on each to make sure the results are consistent, this is the case.\n>>\n>>\n>> On 8 April 2013 18:19, Vasilis Ventirozos <[email protected]> wrote:\n>>\n>>>\n>>> Hello Mark,\n>>> PostgreSQL currently doesn't support parallel query so a faster cpu even\n>>> if it has less cores would be faster for a single query, about benchmarking\n>>> you can try pgbench that you will find in the contrib,\n>>> the execution plan may be different because of different statistics,\n>>> have you analyzed both databases when you compared the execution plans ?\n>>>\n>>> Vasilis Ventirozos\n>>>\n>>>\n>>> Been trying to progress with this today. 
Decided to setup the database\n>>>> on my local machine to try a few things and I'm getting much more sensible\n>>>> results and a totally different query plan\n>>>> http://explain.depesz.com/s/KGd in this case the query took about a\n>>>> minute but does sometimes take around 80 seconds.\n>>>>\n>>>> The config is exactly the same between the two database. The databases\n>>>> them selves are identical with all indexes the same on the tables.\n>>>>\n>>>> The server has an 2 x Intel Xeon E5420 running at 2.5Ghz each, 16GB RAM\n>>>> and the database is just on a SATA HDD which is a Western Digital\n>>>> WD5000AAKS.\n>>>> My desktop has a single i5-3570K running at 3.4Ghz, 16GB RAM and the\n>>>> database is running on a SATA HDD which is a Western Digital WD1002FAEX-0\n>>>>\n>>>> Could anyone offer any reasoning as to why the plan would be so\n>>>> different across the two machines? I would have thought that the server\n>>>> would perform a lot better since it has more cores or is postgres more\n>>>> affected by the CPU speed? Could anyone suggest a way to bench mark the\n>>>> machines for their postgres performance?\n>>>>\n>>>> Thanks again for everyones input,\n>>>>\n>>>> Mark\n>>>>\n>>>>\n>>>> On 7 April 2013 23:22, Mark Davidson <[email protected]> wrote:\n>>>>\n>>>>> Takes a little longer with the INNER join unfortunately. Takes about\n>>>>> ~3.5 minutes, here is the query plan http://explain.depesz.com/s/EgBl.\n>>>>>\n>>>>>\n>>>>> With the JOIN there might not be a match if the data does not fall\n>>>>> within one of the areas that is selected in the IN query.\n>>>>>\n>>>>> So if we have data id (10) that might fall in areas ( 1, 5, 8, 167 )\n>>>>> but the user might be querying areas ( 200 ... 500 ) so no match in area\n>>>>> would be found just to be absolutely clear.\n>>>>>\n>>>>> Is it worth considering adding additional statistics on any of the\n>>>>> columns? And / Or additional INDEXES or different types INDEX? Would it be\n>>>>> worth restructuring the query starting with areas and working to join data\n>>>>> to that?\n>>>>>\n>>>>>\n>>>>> On 7 April 2013 16:15, Kevin Grittner <[email protected]> wrote:\n>>>>>\n>>>>>> Greg Williamson <[email protected]> wrote:\n>>>>>>\n>>>>>> >> Thanks for your response. I tried doing what you suggested so\n>>>>>> >> that table now has a primary key of\n>>>>>> >> ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); '\n>>>>>> >> and I've added the INDEX of\n>>>>>> >> 'CREATE INDEX data_area_data_id_index ON data_area USING btree\n>>>>>> (data_id );'\n>>>>>>\n>>>>>> Yeah, that is what I was suggesting.\n>>>>>>\n>>>>>> >> unfortunately it hasn't resulted in an improvement of the query\n>>>>>> >> performance.\n>>>>>>\n>>>>>> > Did you run analyze on the table after creating the index ?\n>>>>>>\n>>>>>> That probably isn't necessary. Statistics are normally on relations\n>>>>>> and columns; there are only certain special cases where an ANALYZE\n>>>>>> is needed after an index build, like if the index is on an\n>>>>>> expression rather than a list of columns.\n>>>>>>\n>>>>>> Mark, what happens if you change that left join to a normal (inner)\n>>>>>> join? 
Since you're doing an inner join to data_area and that has a\n>>>>>> foreign key to area, there should always be a match anyway, right?\n>>>>>> The optimizer doesn't recognize that, so it can't start from the\n>>>>>> area and just match to the appropriate points.\n>>>>>>\n>>>>>> --\n>>>>>> Kevin Grittner\n>>>>>> EnterpriseDB: http://www.enterprisedb.com\n>>>>>> The Enterprise PostgreSQL Company",
"msg_date": "Mon, 8 Apr 2013 21:18:52 +0100",
"msg_from": "Mark Davidson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: INDEX Performance Issue"
},
{
"msg_contents": "On Mon, Apr 8, 2013 at 11:18 PM, Mark Davidson <[email protected]> wrote:\n\n> Wow my results are absolutely appalling compared to both of those which is\n> really interesting. Are you running postgres 9.2.4 on both instances? Any\n> specific configuration changes?\n> Thinking there must be something up with my setup to be getting such a low\n> tps compared with you.\n>\n\nBoth installations are 9.2.4 and both db's have absolutely default\nconfigurations, i can't really explain why there is so much difference\nbetween our results, i can only imagine the initialization, thats why i\nasked how you populated your pgbench database (scale factor / fill factor).\n\nVasilis Ventirozos\n\n\n> On 8 April 2013 21:02, Vasilis Ventirozos <[email protected]> wrote:\n>\n>>\n>> -c 10 means 10 clients so that should take advantage of all your cores\n>> (see bellow)\n>>\n>> %Cpu0 : 39.3 us, 21.1 sy, 0.0 ni, 38.7 id, 0.9 wa, 0.0 hi, 0.0 si, 0.0 st\n>> %Cpu1 : 38.0 us, 25.0 sy, 0.0 ni, 26.0 id, 4.2 wa, 0.0 hi, 6.8 si, 0.0 st\n>> %Cpu2 : 39.3 us, 20.4 sy, 0.0 ni, 39.0 id, 1.3 wa, 0.0 hi, 0.0 si, 0.0 st\n>> %Cpu3 : 40.0 us, 18.7 sy, 0.0 ni, 40.0 id, 1.3 wa, 0.0 hi, 0.0 si, 0.0 st\n>> %Cpu4 : 13.9 us, 7.1 sy, 0.0 ni, 79.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n>> %Cpu5 : 13.1 us, 8.4 sy, 0.0 ni, 78.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n>> %Cpu6 : 14.8 us, 6.4 sy, 0.0 ni, 78.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n>> %Cpu7 : 15.7 us, 6.7 sy, 0.0 ni, 77.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n>>\n>> i am pasting you the results of the same test on a i7-2600 16gb with a\n>> sata3 SSD and the results from a VM with 2 cores and a normal 7200 rpm hdd\n>>\n>> -- DESKTOP\n>> vasilis@Disorder ~ $ pgbench -c 10 -t 10000 bench\n>> starting vacuum...end.\n>> transaction type: TPC-B (sort of)\n>> scaling factor: 1\n>> query mode: simple\n>> number of clients: 10\n>> number of threads: 1\n>> number of transactions per client: 10000\n>> number of transactions actually processed: 100000/100000\n>> tps = 1713.338111 (including connections establishing)\n>> tps = 1713.948478 (excluding connections establishing)\n>>\n>> -- VM\n>>\n>> postgres@pglab1:~/postgresql-9.2.4/contrib/pgbench$ ./pgbench -c 10 -t\n>> 10000 bench\n>> starting vacuum...end.\n>> transaction type: TPC-B (sort of)\n>> scaling factor: 1\n>> query mode: simple\n>> number of clients: 10\n>> number of threads: 1\n>> number of transactions per client: 10000\n>> number of transactions actually processed: 100000/100000\n>> tps = 1118.976496 (including connections establishing)\n>> tps = 1119.180126 (excluding connections establishing)\n>>\n>> i am assuming that you didn't populate your pgbench db with the default\n>> values , if you tell me how you did i will be happy to re run the test and\n>> see the differences.\n>>\n>>\n>>\n>> On Mon, Apr 8, 2013 at 10:31 PM, Mark Davidson <[email protected]> wrote:\n>>\n>>> Thanks for your response Vasillis. 
I've run pgbench on both machines\n>>> `./pgbench -c 10 -t 10000 pgbench` getting 99.800650 tps on my local\n>>> machine and 23.825332 tps on the server so quite a significant difference.\n>>> Could this purely be down to the CPU clock speed or is it likely\n>>> something else causing the issue?\n>>> I have run ANALYZE on both databases and tried the queries a number of\n>>> times on each to make sure the results are consistent, this is the case.\n>>>\n>>>\n>>> On 8 April 2013 18:19, Vasilis Ventirozos <[email protected]>wrote:\n>>>\n>>>>\n>>>> Hello Mark,\n>>>> PostgreSQL currently doesn't support parallel query so a faster cpu\n>>>> even if it has less cores would be faster for a single query, about\n>>>> benchmarking you can try pgbench that you will find in the contrib,\n>>>> the execution plan may be different because of different statistics,\n>>>> have you analyzed both databases when you compared the execution plans ?\n>>>>\n>>>> Vasilis Ventirozos\n>>>>\n>>>>\n>>>> Been trying to progress with this today. Decided to setup the\n>>>>> database on my local machine to try a few things and I'm getting much more\n>>>>> sensible results and a totally different query plan\n>>>>> http://explain.depesz.com/s/KGd in this case the query took about a\n>>>>> minute but does sometimes take around 80 seconds.\n>>>>>\n>>>>> The config is exactly the same between the two database. The databases\n>>>>> them selves are identical with all indexes the same on the tables.\n>>>>>\n>>>>> The server has an 2 x Intel Xeon E5420 running at 2.5Ghz each, 16GB\n>>>>> RAM and the database is just on a SATA HDD which is a Western Digital\n>>>>> WD5000AAKS.\n>>>>> My desktop has a single i5-3570K running at 3.4Ghz, 16GB RAM and the\n>>>>> database is running on a SATA HDD which is a Western Digital WD1002FAEX-0\n>>>>>\n>>>>> Could anyone offer any reasoning as to why the plan would be so\n>>>>> different across the two machines? I would have thought that the server\n>>>>> would perform a lot better since it has more cores or is postgres more\n>>>>> affected by the CPU speed? Could anyone suggest a way to bench mark the\n>>>>> machines for their postgres performance?\n>>>>>\n>>>>> Thanks again for everyones input,\n>>>>>\n>>>>> Mark\n>>>>>\n>>>>>\n>>>>> On 7 April 2013 23:22, Mark Davidson <[email protected]> wrote:\n>>>>>\n>>>>>> Takes a little longer with the INNER join unfortunately. Takes about\n>>>>>> ~3.5 minutes, here is the query plan http://explain.depesz.com/s/EgBl.\n>>>>>>\n>>>>>>\n>>>>>> With the JOIN there might not be a match if the data does not fall\n>>>>>> within one of the areas that is selected in the IN query.\n>>>>>>\n>>>>>> So if we have data id (10) that might fall in areas ( 1, 5, 8, 167 )\n>>>>>> but the user might be querying areas ( 200 ... 500 ) so no match in area\n>>>>>> would be found just to be absolutely clear.\n>>>>>>\n>>>>>> Is it worth considering adding additional statistics on any of the\n>>>>>> columns? And / Or additional INDEXES or different types INDEX? Would it be\n>>>>>> worth restructuring the query starting with areas and working to join data\n>>>>>> to that?\n>>>>>>\n>>>>>>\n>>>>>> On 7 April 2013 16:15, Kevin Grittner <[email protected]> wrote:\n>>>>>>\n>>>>>>> Greg Williamson <[email protected]> wrote:\n>>>>>>>\n>>>>>>> >> Thanks for your response. 
I tried doing what you suggested so\n>>>>>>> >> that table now has a primary key of\n>>>>>>> >> ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); '\n>>>>>>> >> and I've added the INDEX of\n>>>>>>> >> 'CREATE INDEX data_area_data_id_index ON data_area USING btree\n>>>>>>> (data_id );'\n>>>>>>>\n>>>>>>> Yeah, that is what I was suggesting.\n>>>>>>>\n>>>>>>> >> unfortunately it hasn't resulted in an improvement of the query\n>>>>>>> >> performance.\n>>>>>>>\n>>>>>>> > Did you run analyze on the table after creating the index ?\n>>>>>>>\n>>>>>>> That probably isn't necessary. Statistics are normally on relations\n>>>>>>> and columns; there are only certain special cases where an ANALYZE\n>>>>>>> is needed after an index build, like if the index is on an\n>>>>>>> expression rather than a list of columns.\n>>>>>>>\n>>>>>>> Mark, what happens if you change that left join to a normal (inner)\n>>>>>>> join? Since you're doing an inner join to data_area and that has a\n>>>>>>> foreign key to area, there should always be a match anyway, right?\n>>>>>>> The optimizer doesn't recognize that, so it can't start from the\n>>>>>>> area and just match to the appropriate points.\n>>>>>>>\n>>>>>>> --\n>>>>>>> Kevin Grittner\n>>>>>>> EnterpriseDB: http://www.enterprisedb.com\n>>>>>>> The Enterprise PostgreSQL Company",
"msg_date": "Mon, 8 Apr 2013 23:28:08 +0300",
"msg_from": "Vasilis Ventirozos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INDEX Performance Issue"
},
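Since the two runs above differ so much on otherwise default 9.2.4 installs, the first thing worth pinning down is whether both pgbench databases were built the same way. A minimal SQL check of the scale factor of an already-initialized pgbench database, assuming the default pgbench table names (pgbench_branches gets one row per unit of scale, pgbench_accounts roughly 100,000 rows per unit):

    -- scale factor used at initialization time
    SELECT count(*) AS scale_factor FROM pgbench_branches;

    -- cross-check: should be roughly 100000 * scale_factor
    SELECT count(*) AS accounts FROM pgbench_accounts;

If the two databases report different scale factors, the TPS numbers were never comparable in the first place.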
{
"msg_contents": "On Mon, Apr 8, 2013 at 12:31 PM, Mark Davidson <[email protected]> wrote:\n\n> Thanks for your response Vasillis. I've run pgbench on both machines\n> `./pgbench -c 10 -t 10000 pgbench` getting 99.800650 tps on my local\n> machine and 23.825332 tps on the server so quite a significant difference.\n>\n\nThese results are almost certainly being driven by how fast your machines\ncan fsync the WAL data. The type of query you originally posted does not\ncare about that at all, so these results are not useful to you. You could\nrun the \"pgbench -S\", which is getting closer to the nature of the work\nyour original query does (but still not all that close).\n\nCheers,\n\nJeff\n\nOn Mon, Apr 8, 2013 at 12:31 PM, Mark Davidson <[email protected]> wrote:\nThanks for your response Vasillis. I've run pgbench on both machines `./pgbench -c 10 -t 10000 pgbench` getting 99.800650 tps on my local machine and 23.825332 tps on the server so quite a significant difference. \nThese results are almost certainly being driven by how fast your machines can fsync the WAL data. The type of query you originally posted does not care about that at all, so these results are not useful to you. You could run the \"pgbench -S\", which is getting closer to the nature of the work your original query does (but still not all that close). \nCheers,Jeff",
"msg_date": "Mon, 8 Apr 2013 13:39:23 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INDEX Performance Issue"
},
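A quick way to test Jeff's point that the default TPC-B numbers are bounded by WAL fsync speed rather than CPU is to re-run the identical pgbench test with synchronous commit turned off for the benchmark database; if TPS jumps dramatically, commit flush latency was the limit. This is only a sketch for a disposable test database (the database name is assumed, and synchronous_commit = off can lose recently committed transactions on a crash, so it is not a production setting):

    -- before the run, on the throwaway benchmark database only
    ALTER DATABASE pgbench SET synchronous_commit = off;
    -- ... re-run the same pgbench test, then put the default back
    ALTER DATABASE pgbench RESET synchronous_commit;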
{
"msg_contents": "Sorry Vasillis I missed you asking that I just did './pgbench -i pgbench'\ndidn't specific set any values. I can try some specific ones if you can\nsuggest any.\n\n\nOn 8 April 2013 21:28, Vasilis Ventirozos <[email protected]> wrote:\n\n>\n>\n>\n> On Mon, Apr 8, 2013 at 11:18 PM, Mark Davidson <[email protected]> wrote:\n>\n>> Wow my results are absolutely appalling compared to both of those which\n>> is really interesting. Are you running postgres 9.2.4 on both instances?\n>> Any specific configuration changes?\n>> Thinking there must be something up with my setup to be getting such a\n>> low tps compared with you.\n>>\n>\n> Both installations are 9.2.4 and both db's have absolutely default\n> configurations, i can't really explain why there is so much difference\n> between our results, i can only imagine the initialization, thats why i\n> asked how you populated your pgbench database (scale factor / fill factor).\n>\n> Vasilis Ventirozos\n>\n>\n>> On 8 April 2013 21:02, Vasilis Ventirozos <[email protected]> wrote:\n>>\n>>>\n>>> -c 10 means 10 clients so that should take advantage of all your cores\n>>> (see bellow)\n>>>\n>>> %Cpu0 : 39.3 us, 21.1 sy, 0.0 ni, 38.7 id, 0.9 wa, 0.0 hi, 0.0 si, 0.0 st\n>>> %Cpu1 : 38.0 us, 25.0 sy, 0.0 ni, 26.0 id, 4.2 wa, 0.0 hi, 6.8 si, 0.0 st\n>>> %Cpu2 : 39.3 us, 20.4 sy, 0.0 ni, 39.0 id, 1.3 wa, 0.0 hi, 0.0 si, 0.0 st\n>>> %Cpu3 : 40.0 us, 18.7 sy, 0.0 ni, 40.0 id, 1.3 wa, 0.0 hi, 0.0 si, 0.0 st\n>>> %Cpu4 : 13.9 us, 7.1 sy, 0.0 ni, 79.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n>>> %Cpu5 : 13.1 us, 8.4 sy, 0.0 ni, 78.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n>>> %Cpu6 : 14.8 us, 6.4 sy, 0.0 ni, 78.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n>>> %Cpu7 : 15.7 us, 6.7 sy, 0.0 ni, 77.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\n>>>\n>>> i am pasting you the results of the same test on a i7-2600 16gb with a\n>>> sata3 SSD and the results from a VM with 2 cores and a normal 7200 rpm hdd\n>>>\n>>> -- DESKTOP\n>>> vasilis@Disorder ~ $ pgbench -c 10 -t 10000 bench\n>>> starting vacuum...end.\n>>> transaction type: TPC-B (sort of)\n>>> scaling factor: 1\n>>> query mode: simple\n>>> number of clients: 10\n>>> number of threads: 1\n>>> number of transactions per client: 10000\n>>> number of transactions actually processed: 100000/100000\n>>> tps = 1713.338111 (including connections establishing)\n>>> tps = 1713.948478 (excluding connections establishing)\n>>>\n>>> -- VM\n>>>\n>>> postgres@pglab1:~/postgresql-9.2.4/contrib/pgbench$ ./pgbench -c 10 -t\n>>> 10000 bench\n>>> starting vacuum...end.\n>>> transaction type: TPC-B (sort of)\n>>> scaling factor: 1\n>>> query mode: simple\n>>> number of clients: 10\n>>> number of threads: 1\n>>> number of transactions per client: 10000\n>>> number of transactions actually processed: 100000/100000\n>>> tps = 1118.976496 (including connections establishing)\n>>> tps = 1119.180126 (excluding connections establishing)\n>>>\n>>> i am assuming that you didn't populate your pgbench db with the default\n>>> values , if you tell me how you did i will be happy to re run the test and\n>>> see the differences.\n>>>\n>>>\n>>>\n>>> On Mon, Apr 8, 2013 at 10:31 PM, Mark Davidson <[email protected]> wrote:\n>>>\n>>>> Thanks for your response Vasillis. 
I've run pgbench on both machines\n>>>> `./pgbench -c 10 -t 10000 pgbench` getting 99.800650 tps on my local\n>>>> machine and 23.825332 tps on the server so quite a significant difference.\n>>>> Could this purely be down to the CPU clock speed or is it likely\n>>>> something else causing the issue?\n>>>> I have run ANALYZE on both databases and tried the queries a number of\n>>>> times on each to make sure the results are consistent, this is the case.\n>>>>\n>>>>\n>>>> On 8 April 2013 18:19, Vasilis Ventirozos <[email protected]>wrote:\n>>>>\n>>>>>\n>>>>> Hello Mark,\n>>>>> PostgreSQL currently doesn't support parallel query so a faster cpu\n>>>>> even if it has less cores would be faster for a single query, about\n>>>>> benchmarking you can try pgbench that you will find in the contrib,\n>>>>> the execution plan may be different because of different statistics,\n>>>>> have you analyzed both databases when you compared the execution plans ?\n>>>>>\n>>>>> Vasilis Ventirozos\n>>>>>\n>>>>>\n>>>>> Been trying to progress with this today. Decided to setup the\n>>>>>> database on my local machine to try a few things and I'm getting much more\n>>>>>> sensible results and a totally different query plan\n>>>>>> http://explain.depesz.com/s/KGd in this case the query took about a\n>>>>>> minute but does sometimes take around 80 seconds.\n>>>>>>\n>>>>>> The config is exactly the same between the two database. The\n>>>>>> databases them selves are identical with all indexes the same on the\n>>>>>> tables.\n>>>>>>\n>>>>>> The server has an 2 x Intel Xeon E5420 running at 2.5Ghz each, 16GB\n>>>>>> RAM and the database is just on a SATA HDD which is a Western Digital\n>>>>>> WD5000AAKS.\n>>>>>> My desktop has a single i5-3570K running at 3.4Ghz, 16GB RAM and the\n>>>>>> database is running on a SATA HDD which is a Western Digital WD1002FAEX-0\n>>>>>>\n>>>>>> Could anyone offer any reasoning as to why the plan would be so\n>>>>>> different across the two machines? I would have thought that the server\n>>>>>> would perform a lot better since it has more cores or is postgres more\n>>>>>> affected by the CPU speed? Could anyone suggest a way to bench mark the\n>>>>>> machines for their postgres performance?\n>>>>>>\n>>>>>> Thanks again for everyones input,\n>>>>>>\n>>>>>> Mark\n>>>>>>\n>>>>>>\n>>>>>> On 7 April 2013 23:22, Mark Davidson <[email protected]> wrote:\n>>>>>>\n>>>>>>> Takes a little longer with the INNER join unfortunately. Takes about\n>>>>>>> ~3.5 minutes, here is the query plan\n>>>>>>> http://explain.depesz.com/s/EgBl.\n>>>>>>>\n>>>>>>> With the JOIN there might not be a match if the data does not fall\n>>>>>>> within one of the areas that is selected in the IN query.\n>>>>>>>\n>>>>>>> So if we have data id (10) that might fall in areas ( 1, 5, 8, 167 )\n>>>>>>> but the user might be querying areas ( 200 ... 500 ) so no match in area\n>>>>>>> would be found just to be absolutely clear.\n>>>>>>>\n>>>>>>> Is it worth considering adding additional statistics on any of the\n>>>>>>> columns? And / Or additional INDEXES or different types INDEX? Would it be\n>>>>>>> worth restructuring the query starting with areas and working to join data\n>>>>>>> to that?\n>>>>>>>\n>>>>>>>\n>>>>>>> On 7 April 2013 16:15, Kevin Grittner <[email protected]> wrote:\n>>>>>>>\n>>>>>>>> Greg Williamson <[email protected]> wrote:\n>>>>>>>>\n>>>>>>>> >> Thanks for your response. 
I tried doing what you suggested so\n>>>>>>>> >> that table now has a primary key of\n>>>>>>>> >> ' CONSTRAINT data_area_pkey PRIMARY KEY(area_id , data_id ); '\n>>>>>>>> >> and I've added the INDEX of\n>>>>>>>> >> 'CREATE INDEX data_area_data_id_index ON data_area USING btree\n>>>>>>>> (data_id );'\n>>>>>>>>\n>>>>>>>> Yeah, that is what I was suggesting.\n>>>>>>>>\n>>>>>>>> >> unfortunately it hasn't resulted in an improvement of the query\n>>>>>>>> >> performance.\n>>>>>>>>\n>>>>>>>> > Did you run analyze on the table after creating the index ?\n>>>>>>>>\n>>>>>>>> That probably isn't necessary. Statistics are normally on relations\n>>>>>>>> and columns; there are only certain special cases where an ANALYZE\n>>>>>>>> is needed after an index build, like if the index is on an\n>>>>>>>> expression rather than a list of columns.\n>>>>>>>>\n>>>>>>>> Mark, what happens if you change that left join to a normal (inner)\n>>>>>>>> join? Since you're doing an inner join to data_area and that has a\n>>>>>>>> foreign key to area, there should always be a match anyway, right?\n>>>>>>>> The optimizer doesn't recognize that, so it can't start from the\n>>>>>>>> area and just match to the appropriate points.\n>>>>>>>>\n>>>>>>>> --\n>>>>>>>> Kevin Grittner\n>>>>>>>> EnterpriseDB: http://www.enterprisedb.com\n>>>>>>>> The Enterprise PostgreSQL Company",
"msg_date": "Mon, 8 Apr 2013 21:57:04 +0100",
"msg_from": "Mark Davidson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: INDEX Performance Issue"
},
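A plain './pgbench -i pgbench' initializes at scale factor 1, which means about 100,000 rows in pgbench_accounts and a single row in pgbench_branches, so ten concurrent TPC-B clients all update that same branch row and serialize on its row lock; that, on top of fsync speed, can hold TPS well below what the hardware can do. A quick sanity check of how the default initialization sized the tables (default table names assumed):

    SELECT relname, n_live_tup
    FROM pg_stat_user_tables
    WHERE relname LIKE 'pgbench_%'
    ORDER BY relname;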
{
"msg_contents": "Hi Jeff,\n\nI'ved tried this test using the -S flag './pgbench -c 4 -j 2 -T 600 -S\npgbench'\n\nDesktop gives me\n\n./pgbench -c 4 -j 2 -T 600 -S pgbench\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 1\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 2\nduration: 600 s\nnumber of transactions actually processed: 35261835\ntps = 58769.715695 (including connections establishing)\ntps = 58770.258977 (excluding connections establishing)\n\nServer\n\n./pgbench -c 4 -j 2 -T 600 -S pgbench\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 1\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 2\nduration: 600 s\nnumber of transactions actually processed: 22642303\ntps = 37737.157641 (including connections establishing)\ntps = 37738.167325 (excluding connections establishing)\n\n\n\nOn 8 April 2013 21:39, Jeff Janes <[email protected]> wrote:\n\n> On Mon, Apr 8, 2013 at 12:31 PM, Mark Davidson <[email protected]> wrote:\n>\n>> Thanks for your response Vasillis. I've run pgbench on both machines\n>> `./pgbench -c 10 -t 10000 pgbench` getting 99.800650 tps on my local\n>> machine and 23.825332 tps on the server so quite a significant difference.\n>>\n>\n> These results are almost certainly being driven by how fast your machines\n> can fsync the WAL data. The type of query you originally posted does not\n> care about that at all, so these results are not useful to you. You could\n> run the \"pgbench -S\", which is getting closer to the nature of the work\n> your original query does (but still not all that close).\n>\n> Cheers,\n>\n> Jeff\n>\n\nHi Jeff,I'ved tried this test using the -S flag './pgbench -c 4 -j 2 -T 600 -S pgbench'Desktop gives me ./pgbench -c 4 -j 2 -T 600 -S pgbench\nstarting vacuum...end.transaction type: SELECT onlyscaling factor: 1query mode: simplenumber of clients: 4number of threads: 2duration: 600 snumber of transactions actually processed: 35261835\ntps = 58769.715695 (including connections establishing)tps = 58770.258977 (excluding connections establishing)Server ./pgbench -c 4 -j 2 -T 600 -S pgbenchstarting vacuum...end.transaction type: SELECT only\nscaling factor: 1query mode: simplenumber of clients: 4number of threads: 2duration: 600 snumber of transactions actually processed: 22642303tps = 37737.157641 (including connections establishing)\ntps = 37738.167325 (excluding connections establishing)On 8 April 2013 21:39, Jeff Janes <[email protected]> wrote:\nOn Mon, Apr 8, 2013 at 12:31 PM, Mark Davidson <[email protected]> wrote:\n\nThanks for your response Vasillis. I've run pgbench on both machines `./pgbench -c 10 -t 10000 pgbench` getting 99.800650 tps on my local machine and 23.825332 tps on the server so quite a significant difference. \nThese results are almost certainly being driven by how fast your machines can fsync the WAL data. The type of query you originally posted does not care about that at all, so these results are not useful to you. You could run the \"pgbench -S\", which is getting closer to the nature of the work your original query does (but still not all that close). \nCheers,Jeff",
"msg_date": "Mon, 8 Apr 2013 22:01:03 +0100",
"msg_from": "Mark Davidson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: INDEX Performance Issue"
},
{
"msg_contents": "On Fri, Apr 5, 2013 at 8:51 AM, Mark Davidson <[email protected]> wrote:\n\n> Hi All,\n>\n> Hoping someone can help me out with some performance issues I'm having\n> with the INDEX on my database. I've got a database that has a data table\n> containing ~55,000,000 rows which have point data and an area table\n> containing ~3,500 rows which have polygon data. A user queries the data by\n> selecting what areas they want to view and using some other filters such as\n> datatime and what datasets they want to query. This all works fine and\n> previously the intersect of the data rows to the areas was being done on\n> the fly with PostGIS ST_Intersects. However as the data table grow we\n> decided it would make sense to offload the data processing and not\n> calculate the intersect for a row on the fly each time, but to\n> pre-calculate it and store the result in the join table. Resultantly this\n> produce a table data_area which contains ~250,000,000 rows.\n>\n\n\nI think your old method is likely the better option, especially if the\nintersect can be offloaded to the client or app server (I don't know enough\nabout ST_Intersects to know how likely that is).\n\nWhat is the difference in performance between the old method and the new\nmethod?\n\nCheers,\n\nJeff\n\nOn Fri, Apr 5, 2013 at 8:51 AM, Mark Davidson <[email protected]> wrote:\nHi All, Hoping someone can help me out with some performance issues I'm having with the INDEX on my database. I've got a database that has a data table containing ~55,000,000 rows which have point data and an area table containing ~3,500 rows which have polygon data. A user queries the data by selecting what areas they want to view and using some other filters such as datatime and what datasets they want to query. This all works fine and previously the intersect of the data rows to the areas was being done on the fly with PostGIS ST_Intersects. However as the data table grow we decided it would make sense to offload the data processing and not calculate the intersect for a row on the fly each time, but to pre-calculate it and store the result in the join table. Resultantly this produce a table data_area which contains ~250,000,000 rows.\nI think your old method is likely the better option, especially if the intersect can be offloaded to the client or app server (I don't know enough about ST_Intersects to know how likely that is).\nWhat is the difference in performance between the old method and the new method?Cheers,Jeff",
"msg_date": "Mon, 15 Apr 2013 12:30:51 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INDEX Performance Issue"
},
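For reference, the "old method" Jeff prefers computes the point-in-polygon test at query time instead of joining through the pre-computed ~250,000,000-row data_area table. A rough sketch of that shape, with PostGIS geometry columns assumed to be named geom on both tables (the real column names are not shown in the thread) and GiST indexes so ST_Intersects can use an index scan:

    -- hypothetical column names; adjust to the real schema
    CREATE INDEX area_geom_gist ON area USING gist (geom);
    CREATE INDEX data_geom_gist ON data USING gist (geom);

    SELECT count(*)
    FROM data d
    JOIN area a ON ST_Intersects(d.geom, a.geom)
    WHERE a.id IN (200, 201, 202);   -- the user's selected areas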
{
"msg_contents": "On Sun, Apr 7, 2013 at 3:22 PM, Mark Davidson <[email protected]> wrote:\n\n> Takes a little longer with the INNER join unfortunately. Takes about ~3.5\n> minutes, here is the query plan http://explain.depesz.com/s/EgBl.\n>\n> With the JOIN there might not be a match if the data does not fall within\n> one of the areas that is selected in the IN query.\n>\n> So if we have data id (10) that might fall in areas ( 1, 5, 8, 167 ) but\n> the user might be querying areas ( 200 ... 500 ) so no match in area would\n> be found just to be absolutely clear.\n>\n\nI'm not clear on what you *want* to happen. Are you sure it works the way\nyou want it to now? If you want every specified id to return at least one\nrow even if there is no qualified area matching it, you have to move the\nLEFT JOIN one join to the left, and have to move the IN list criteria from\nthe WHERE to the JOIN.\n\nCheers,\n\nJeff\n\nOn Sun, Apr 7, 2013 at 3:22 PM, Mark Davidson <[email protected]> wrote:\nTakes a little longer with the INNER join unfortunately. Takes about ~3.5 minutes, here is the query plan http://explain.depesz.com/s/EgBl. \nWith the JOIN there might not be a match if the data does not fall within one of the areas that is selected in the IN query. \nSo if we have data id (10) that might fall in areas ( 1, 5, 8, 167 ) but the user might be querying areas ( 200 ... 500 ) so no match in area would be found just to be absolutely clear. \nI'm not clear on what you *want* to happen. Are you sure it works the way you want it to now? If you want every specified id to return at least one row even if there is no qualified area matching it, you have to move the LEFT JOIN one join to the left, and have to move the IN list criteria from the WHERE to the JOIN.\nCheers,Jeff",
"msg_date": "Mon, 15 Apr 2013 12:37:33 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INDEX Performance Issue"
},
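A sketch of the restructuring Jeff describes, with the LEFT JOIN anchored on the data rows and the IN list moved out of the WHERE clause into the join condition, so a data row with no qualifying area still comes back (with NULLs) instead of being filtered away. Table and column names are inferred from earlier messages and the datetime column is a placeholder, so this may not match the real schema:

    SELECT d.*, da.area_id
    FROM data d
    LEFT JOIN data_area da
           ON da.data_id = d.id
          AND da.area_id IN (200, 201, 202)   -- moved from WHERE into the join
    WHERE d.captured_at >= '2013-01-01';      -- other filters stay in WHERE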
{
"msg_contents": "On Mon, Apr 8, 2013 at 10:02 AM, Mark Davidson <[email protected]> wrote:\n\n> Been trying to progress with this today. Decided to setup the database on\n> my local machine to try a few things and I'm getting much more sensible\n> results and a totally different query plan http://explain.depesz.com/s/KGdin this case the query took about a minute but does sometimes take around\n> 80 seconds.\n>\n> The config is exactly the same between the two database. The databases\n> them selves are identical with all indexes the same on the tables.\n>\n> The server has an 2 x Intel Xeon E5420 running at 2.5Ghz each, 16GB RAM\n> and the database is just on a SATA HDD which is a Western Digital\n> WD5000AAKS.\n> My desktop has a single i5-3570K running at 3.4Ghz, 16GB RAM and the\n> database is running on a SATA HDD which is a Western Digital WD1002FAEX-0\n>\n> Could anyone offer any reasoning as to why the plan would be so different\n> across the two machines?\n>\n\n\nThe estimated costs of the two plans are very close to each other, so it\ndoesn't take much to cause a switch to happen.\n\nIs the test instance a binary copy of the production one (i.e. created from\na base backup) or is it only a logical copy (e.g. pg_dump followed by a\nrestore)? A logical copy will probably be more compact than the original\nand so will have different slightly estimates.\n\nYou could check pg_class for relpages on all relevant tables and indexes on\nboth servers.\n\nAlso, since ANALYZE uses a random sampling for large tables, the estimates\ncan move around just by chance. If you repeat the query several times with\nan ANALYZE in between, does the plan change, or if not how much does the\nestimated cost change within the plan? You could check pg_stats for the\nrelevant tables and columns between the two servers to see how similar they\nare.\n\nThe estimated cost of a hash join is very dependent on how frequent the\nmost common value of the hashed column is thought to be. And the estimate\nof this number can be very fragile if ANALYZE is based on a small fraction\nof the table. Turning up the statistics for those columns might be\nworthwhile.\n\nCheers,\n\nJeff\n\nOn Mon, Apr 8, 2013 at 10:02 AM, Mark Davidson <[email protected]> wrote:\nBeen trying to progress with this today. Decided to setup the database on my local machine to try a few things and I'm getting much more sensible results and a totally different query plan http://explain.depesz.com/s/KGd in this case the query took about a minute but does sometimes take around 80 seconds. \nThe config is exactly the same between the two database. The databases them selves are identical with all indexes the same on the tables. The server has an 2 x Intel Xeon E5420 running at 2.5Ghz each, 16GB RAM and the database is just on a SATA HDD which is a Western Digital WD5000AAKS.\nMy desktop has a single i5-3570K running at 3.4Ghz, 16GB RAM and the database is running on a SATA HDD which is a Western Digital WD1002FAEX-0Could anyone offer any reasoning as to why the plan would be so different across the two machines? \nThe estimated costs of the two plans are very close to each other, so it doesn't take much to cause a switch to happen.\nIs the test instance a binary copy of the production one (i.e. created from a base backup) or is it only a logical copy (e.g. pg_dump followed by a restore)? 
A logical copy will probably be more compact than the original and so will have different slightly estimates.\nYou could check pg_class for relpages on all relevant tables and indexes on both servers.Also, since ANALYZE uses a random sampling for large tables, the estimates can move around just by chance. If you repeat the query several times with an ANALYZE in between, does the plan change, or if not how much does the estimated cost change within the plan? You could check pg_stats for the relevant tables and columns between the two servers to see how similar they are.\nThe estimated cost of a hash join is very dependent on how frequent the most common value of the hashed column is thought to be. And the estimate of this number can be very fragile if ANALYZE is based on a small fraction of the table. Turning up the statistics for those columns might be worthwhile.\nCheers,Jeff",
"msg_date": "Mon, 15 Apr 2013 13:16:17 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: INDEX Performance Issue"
}
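The checks Jeff suggests can all be run from psql on both machines. A sketch using the table names from this thread as a guess (relpages/reltuples show what the planner believes each relation's size is, and raising a column's statistics target makes ANALYZE sample more rows before estimating the most-common-value frequencies that drive the hash join costing):

    -- compare planner size estimates between the two servers
    SELECT relname, relpages, reltuples::bigint
    FROM pg_class
    WHERE relname IN ('data', 'area', 'data_area');

    -- sample the join columns more heavily, then re-analyze
    ALTER TABLE data_area ALTER COLUMN area_id SET STATISTICS 1000;
    ALTER TABLE data_area ALTER COLUMN data_id SET STATISTICS 1000;
    ANALYZE data_area;

    -- inspect the per-column statistics the planner will use
    SELECT tablename, attname, n_distinct, most_common_vals
    FROM pg_stats
    WHERE tablename = 'data_area';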
] |
[
{
"msg_contents": "On 9.2.4, running two identical queries except for the value of a column in\nthe WHERE clause. Postgres is picking very different query plans, the first\nis much slower than the second.\n\nAny ideas on how I can speed this up? I have btree indexes for all the\ncolumns used in the query.\n\nexplain analyze\n\nSELECT COUNT(*)\n\nFROM purchased_items pi\n\ninner join line_items li on li.id = pi.line_item_id\n\ninner join products on products.id = li.product_id\n\nWHERE products.drop_shipper_id = 221;\n\n Aggregate (cost=193356.31..193356.32 rows=1 width=0) (actual\ntime=2425.225..2425.225 rows=1 loops=1)\n -> Hash Join (cost=78864.43..193160.41 rows=78360 width=0) (actual\ntime=726.612..2424.206 rows=8413 loops=1)\n Hash Cond: (pi.line_item_id = li.id)\n -> Seq Scan on purchased_items pi (cost=0.00..60912.39\nrows=3724639 width=4) (actual time=0.008..616.812 rows=3724639 loops=1)\n -> Hash (cost=77937.19..77937.19 rows=56499 width=4) (actual\ntime=726.231..726.231 rows=8178 loops=1)\n Buckets: 4096 Batches: 4 Memory Usage: 73kB\n -> Hash Join (cost=1684.33..77937.19 rows=56499 width=4)\n(actual time=1.270..723.222 rows=8178 loops=1)\n Hash Cond: (li.product_id = products.id)\n -> Seq Scan on line_items li (cost=0.00..65617.18\nrows=2685518 width=8) (actual time=0.081..392.926 rows=2685499 loops=1)\n -> Hash (cost=1676.60..1676.60 rows=618 width=4)\n(actual time=0.835..0.835 rows=618 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 22kB\n -> Bitmap Heap Scan on products\n (cost=13.07..1676.60 rows=618 width=4) (actual time=0.185..0.752 rows=618\nloops=1)\n Recheck Cond: (drop_shipper_id = 221)\n -> Bitmap Index Scan on\nindex_products_on_drop_shipper_id (cost=0.00..12.92 rows=618 width=0)\n(actual time=0.125..0.125 rows=618 loops=1)\n Index Cond: (drop_shipper_id = 221)\n Total runtime: 2425.302 ms\n\n\nexplain analyze\n\nSELECT COUNT(*)\n\nFROM purchased_items pi\n\ninner join line_items li on li.id = pi.line_item_id\n\ninner join products on products.id = li.product_id\n\nWHERE products.drop_shipper_id = 2;\n\n\n\n\n Aggregate (cost=29260.40..29260.41 rows=1 width=0) (actual\ntime=0.906..0.906 rows=1 loops=1)\n -> Nested Loop (cost=0.00..29254.38 rows=2409 width=0) (actual\ntime=0.029..0.877 rows=172 loops=1)\n -> Nested Loop (cost=0.00..16011.70 rows=1737 width=4) (actual\ntime=0.021..0.383 rows=167 loops=1)\n -> Index Scan using index_products_on_drop_shipper_id on\nproducts (cost=0.00..80.41 rows=19 width=4) (actual time=0.010..0.074\nrows=70 loops=1)\n Index Cond: (drop_shipper_id = 2)\n -> Index Scan using index_line_items_on_product_id on\nline_items li (cost=0.00..835.70 rows=279 width=8) (actual\ntime=0.002..0.004 rows=2 loops=70)\n Index Cond: (product_id = products.id)\n -> Index Only Scan using purchased_items_line_item_id_idx on\npurchased_items pi (cost=0.00..7.60 rows=2 width=4) (actual\ntime=0.002..0.003 rows=1 loops=167)\n Index Cond: (line_item_id = li.id)\n Heap Fetches: 5\n Total runtime: 0.955 ms\n(11 rows)\n\nOn 9.2.4, running two identical queries except for the value of a column in the WHERE clause. Postgres is picking very different query plans, the first is much slower than the second.Any ideas on how I can speed this up? 
I have btree indexes for all the columns used in the query.",
"msg_date": "Fri, 5 Apr 2013 18:38:01 -0700",
"msg_from": "Joe Van Dyk <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow joins?"
},
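The slow plan above turns on a row-count misestimate: the planner expects 78,360 joined rows for drop_shipper_id = 221 but only 8,413 come back, which is part of what makes the seq-scan-plus-hash-join plan look cheaper than the index-driven nested loops. Before forcing plans with enable_* settings, one low-risk step is to give the planner more detailed statistics on the columns it estimates through; a sketch using the table and column names from the query:

    ALTER TABLE products   ALTER COLUMN drop_shipper_id SET STATISTICS 1000;
    ALTER TABLE line_items ALTER COLUMN product_id      SET STATISTICS 1000;
    ANALYZE products;
    ANALYZE line_items;
    -- then re-run the EXPLAIN ANALYZE for drop_shipper_id = 221 and compare the estimates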
{
"msg_contents": "(\nhttps://gist.github.com/joevandyk/df0df703f3fda6d14ae1/raw/c15cae813913b7f8c35b24b467a0c732c0100d79/gistfile1.txtshows\na non-wrapped version of the queries and plan)\n\n\nOn Fri, Apr 5, 2013 at 6:38 PM, Joe Van Dyk <[email protected]> wrote:\n\n> On 9.2.4, running two identical queries except for the value of a column\n> in the WHERE clause. Postgres is picking very different query plans, the\n> first is much slower than the second.\n>\n> Any ideas on how I can speed this up? I have btree indexes for all the\n> columns used in the query.\n>\n> explain analyze\n>\n> SELECT COUNT(*)\n>\n> FROM purchased_items pi\n>\n> inner join line_items li on li.id = pi.line_item_id\n>\n> inner join products on products.id = li.product_id\n>\n> WHERE products.drop_shipper_id = 221;\n>\n> Aggregate (cost=193356.31..193356.32 rows=1 width=0) (actual\n> time=2425.225..2425.225 rows=1 loops=1)\n> -> Hash Join (cost=78864.43..193160.41 rows=78360 width=0) (actual\n> time=726.612..2424.206 rows=8413 loops=1)\n> Hash Cond: (pi.line_item_id = li.id)\n> -> Seq Scan on purchased_items pi (cost=0.00..60912.39\n> rows=3724639 width=4) (actual time=0.008..616.812 rows=3724639 loops=1)\n> -> Hash (cost=77937.19..77937.19 rows=56499 width=4) (actual\n> time=726.231..726.231 rows=8178 loops=1)\n> Buckets: 4096 Batches: 4 Memory Usage: 73kB\n> -> Hash Join (cost=1684.33..77937.19 rows=56499 width=4)\n> (actual time=1.270..723.222 rows=8178 loops=1)\n> Hash Cond: (li.product_id = products.id)\n> -> Seq Scan on line_items li (cost=0.00..65617.18\n> rows=2685518 width=8) (actual time=0.081..392.926 rows=2685499 loops=1)\n> -> Hash (cost=1676.60..1676.60 rows=618 width=4)\n> (actual time=0.835..0.835 rows=618 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 22kB\n> -> Bitmap Heap Scan on products\n> (cost=13.07..1676.60 rows=618 width=4) (actual time=0.185..0.752 rows=618\n> loops=1)\n> Recheck Cond: (drop_shipper_id = 221)\n> -> Bitmap Index Scan on\n> index_products_on_drop_shipper_id (cost=0.00..12.92 rows=618 width=0)\n> (actual time=0.125..0.125 rows=618 loops=1)\n> Index Cond: (drop_shipper_id = 221)\n> Total runtime: 2425.302 ms\n>\n>\n> explain analyze\n>\n> SELECT COUNT(*)\n>\n> FROM purchased_items pi\n>\n> inner join line_items li on li.id = pi.line_item_id\n>\n> inner join products on products.id = li.product_id\n>\n> WHERE products.drop_shipper_id = 2;\n>\n>\n>\n>\n> Aggregate (cost=29260.40..29260.41 rows=1 width=0) (actual\n> time=0.906..0.906 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..29254.38 rows=2409 width=0) (actual\n> time=0.029..0.877 rows=172 loops=1)\n> -> Nested Loop (cost=0.00..16011.70 rows=1737 width=4) (actual\n> time=0.021..0.383 rows=167 loops=1)\n> -> Index Scan using index_products_on_drop_shipper_id on\n> products (cost=0.00..80.41 rows=19 width=4) (actual time=0.010..0.074\n> rows=70 loops=1)\n> Index Cond: (drop_shipper_id = 2)\n> -> Index Scan using index_line_items_on_product_id on\n> line_items li (cost=0.00..835.70 rows=279 width=8) (actual\n> time=0.002..0.004 rows=2 loops=70)\n> Index Cond: (product_id = products.id)\n> -> Index Only Scan using purchased_items_line_item_id_idx on\n> purchased_items pi (cost=0.00..7.60 rows=2 width=4) (actual\n> time=0.002..0.003 rows=1 loops=167)\n> Index Cond: (line_item_id = li.id)\n> Heap Fetches: 5\n> Total runtime: 0.955 ms\n> (11 rows)\n>\n\n(https://gist.github.com/joevandyk/df0df703f3fda6d14ae1/raw/c15cae813913b7f8c35b24b467a0c732c0100d79/gistfile1.txt shows a non-wrapped version of the queries and plan)\nOn Fri, Apr 
5, 2013 at 6:38 PM, Joe Van Dyk <[email protected]> wrote:\nOn 9.2.4, running two identical queries except for the value of a column in the WHERE clause. Postgres is picking very different query plans, the first is much slower than the second.Any ideas on how I can speed this up? I have btree indexes for all the columns used in the query.\nexplain analyze SELECT COUNT(*) FROM purchased_items pi \r\n\r\n\r\ninner join line_items li on li.id = pi.line_item_id inner join products on products.id = li.product_id \r\n\r\n\r\nWHERE products.drop_shipper_id = 221; Aggregate (cost=193356.31..193356.32 rows=1 width=0) (actual time=2425.225..2425.225 rows=1 loops=1) -> Hash Join (cost=78864.43..193160.41 rows=78360 width=0) (actual time=726.612..2424.206 rows=8413 loops=1)\r\n\r\n\r\n Hash Cond: (pi.line_item_id = li.id) -> Seq Scan on purchased_items pi (cost=0.00..60912.39 rows=3724639 width=4) (actual time=0.008..616.812 rows=3724639 loops=1)\r\n\r\n -> Hash (cost=77937.19..77937.19 rows=56499 width=4) (actual time=726.231..726.231 rows=8178 loops=1)\r\n Buckets: 4096 Batches: 4 Memory Usage: 73kB -> Hash Join (cost=1684.33..77937.19 rows=56499 width=4) (actual time=1.270..723.222 rows=8178 loops=1) Hash Cond: (li.product_id = products.id)\r\n\r\n\r\n -> Seq Scan on line_items li (cost=0.00..65617.18 rows=2685518 width=8) (actual time=0.081..392.926 rows=2685499 loops=1) -> Hash (cost=1676.60..1676.60 rows=618 width=4) (actual time=0.835..0.835 rows=618 loops=1)\r\n\r\n\r\n Buckets: 1024 Batches: 1 Memory Usage: 22kB -> Bitmap Heap Scan on products (cost=13.07..1676.60 rows=618 width=4) (actual time=0.185..0.752 rows=618 loops=1)\r\n\r\n\r\n Recheck Cond: (drop_shipper_id = 221) -> Bitmap Index Scan on index_products_on_drop_shipper_id (cost=0.00..12.92 rows=618 width=0) (actual time=0.125..0.125 rows=618 loops=1)\r\n\r\n\r\n Index Cond: (drop_shipper_id = 221) Total runtime: 2425.302 msexplain analyze \r\n\r\n\r\nSELECT COUNT(*) FROM purchased_items pi inner join line_items li on li.id = pi.line_item_id \r\n\r\n\r\ninner join products on products.id = li.product_id WHERE products.drop_shipper_id = 2; \r\n\r\n\r\n Aggregate (cost=29260.40..29260.41 rows=1 width=0) (actual time=0.906..0.906 rows=1 loops=1)\r\n\r\n\r\n -> Nested Loop (cost=0.00..29254.38 rows=2409 width=0) (actual time=0.029..0.877 rows=172 loops=1) -> Nested Loop (cost=0.00..16011.70 rows=1737 width=4) (actual time=0.021..0.383 rows=167 loops=1)\r\n\r\n\r\n -> Index Scan using index_products_on_drop_shipper_id on products (cost=0.00..80.41 rows=19 width=4) (actual time=0.010..0.074 rows=70 loops=1) Index Cond: (drop_shipper_id = 2)\r\n\r\n\r\n -> Index Scan using index_line_items_on_product_id on line_items li (cost=0.00..835.70 rows=279 width=8) (actual time=0.002..0.004 rows=2 loops=70) Index Cond: (product_id = products.id)\r\n\r\n\r\n -> Index Only Scan using purchased_items_line_item_id_idx on purchased_items pi (cost=0.00..7.60 rows=2 width=4) (actual time=0.002..0.003 rows=1 loops=167) Index Cond: (line_item_id = li.id)\r\n\r\n\r\n Heap Fetches: 5 Total runtime: 0.955 ms(11 rows)",
"msg_date": "Fri, 5 Apr 2013 18:42:29 -0700",
"msg_from": "Joe Van Dyk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow joins?"
},
{
"msg_contents": "If I disable sequential scans, hash joins, and merge joins, the query plans\nbecome the same and performance on the first slow one is much improved.\n\nIs there something else I can do to avoid this problem?\n\nbelow also at\nhttps://gist.github.com/joevandyk/34e31b3ad5cccb730a50/raw/8081a4298ba50ac93a86df97c1d0aae482ee7d2d/gistfile1.txt\n\n Aggregate (cost=869360.53..869360.54 rows=1 width=0) (actual\ntime=103.102..103.102 rows=1 loops=1)\n -> Nested Loop (cost=0.00..869164.63 rows=78360 width=0) (actual\ntime=0.253..101.708 rows=8413 loops=1)\n -> Nested Loop (cost=0.00..438422.95 rows=56499 width=4) (actual\ntime=0.157..51.766 rows=8178 loops=1)\n -> Index Scan using index_products_on_drop_shipper_id on\nproducts (cost=0.00..2312.56 rows=618 width=4) (actual time=0.087..6.318\nrows=618 loops=1)\n Index Cond: (drop_shipper_id = 221)\n -> Index Scan using index_line_items_on_product_id on\nline_items li (cost=0.00..702.89 rows=279 width=8) (actual\ntime=0.010..0.069 rows=13 loops=618)\n Index Cond: (product_id = products.id)\n -> Index Only Scan using purchased_items_line_item_id_idx on\npurchased_items pi (cost=0.00..7.60 rows=2 width=4) (actual\ntime=0.005..0.005 rows=1 loops=8178)\n Index Cond: (line_item_id = li.id)\n Heap Fetches: 144\n Total runtime: 103.442 ms\n(11 rows)\n\n\n\nOn Fri, Apr 5, 2013 at 6:38 PM, Joe Van Dyk <[email protected]> wrote:\n\n> On 9.2.4, running two identical queries except for the value of a column\n> in the WHERE clause. Postgres is picking very different query plans, the\n> first is much slower than the second.\n>\n> Any ideas on how I can speed this up? I have btree indexes for all the\n> columns used in the query.\n>\n> explain analyze\n>\n> SELECT COUNT(*)\n>\n> FROM purchased_items pi\n>\n> inner join line_items li on li.id = pi.line_item_id\n>\n> inner join products on products.id = li.product_id\n>\n> WHERE products.drop_shipper_id = 221;\n>\n> Aggregate (cost=193356.31..193356.32 rows=1 width=0) (actual\n> time=2425.225..2425.225 rows=1 loops=1)\n> -> Hash Join (cost=78864.43..193160.41 rows=78360 width=0) (actual\n> time=726.612..2424.206 rows=8413 loops=1)\n> Hash Cond: (pi.line_item_id = li.id)\n> -> Seq Scan on purchased_items pi (cost=0.00..60912.39\n> rows=3724639 width=4) (actual time=0.008..616.812 rows=3724639 loops=1)\n> -> Hash (cost=77937.19..77937.19 rows=56499 width=4) (actual\n> time=726.231..726.231 rows=8178 loops=1)\n> Buckets: 4096 Batches: 4 Memory Usage: 73kB\n> -> Hash Join (cost=1684.33..77937.19 rows=56499 width=4)\n> (actual time=1.270..723.222 rows=8178 loops=1)\n> Hash Cond: (li.product_id = products.id)\n> -> Seq Scan on line_items li (cost=0.00..65617.18\n> rows=2685518 width=8) (actual time=0.081..392.926 rows=2685499 loops=1)\n> -> Hash (cost=1676.60..1676.60 rows=618 width=4)\n> (actual time=0.835..0.835 rows=618 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 22kB\n> -> Bitmap Heap Scan on products\n> (cost=13.07..1676.60 rows=618 width=4) (actual time=0.185..0.752 rows=618\n> loops=1)\n> Recheck Cond: (drop_shipper_id = 221)\n> -> Bitmap Index Scan on\n> index_products_on_drop_shipper_id (cost=0.00..12.92 rows=618 width=0)\n> (actual time=0.125..0.125 rows=618 loops=1)\n> Index Cond: (drop_shipper_id = 221)\n> Total runtime: 2425.302 ms\n>\n>\n> explain analyze\n>\n> SELECT COUNT(*)\n>\n> FROM purchased_items pi\n>\n> inner join line_items li on li.id = pi.line_item_id\n>\n> inner join products on products.id = li.product_id\n>\n> WHERE products.drop_shipper_id = 2;\n>\n>\n>\n>\n> Aggregate 
(cost=29260.40..29260.41 rows=1 width=0) (actual\n> time=0.906..0.906 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..29254.38 rows=2409 width=0) (actual\n> time=0.029..0.877 rows=172 loops=1)\n> -> Nested Loop (cost=0.00..16011.70 rows=1737 width=4) (actual\n> time=0.021..0.383 rows=167 loops=1)\n> -> Index Scan using index_products_on_drop_shipper_id on\n> products (cost=0.00..80.41 rows=19 width=4) (actual time=0.010..0.074\n> rows=70 loops=1)\n> Index Cond: (drop_shipper_id = 2)\n> -> Index Scan using index_line_items_on_product_id on\n> line_items li (cost=0.00..835.70 rows=279 width=8) (actual\n> time=0.002..0.004 rows=2 loops=70)\n> Index Cond: (product_id = products.id)\n> -> Index Only Scan using purchased_items_line_item_id_idx on\n> purchased_items pi (cost=0.00..7.60 rows=2 width=4) (actual\n> time=0.002..0.003 rows=1 loops=167)\n> Index Cond: (line_item_id = li.id)\n> Heap Fetches: 5\n> Total runtime: 0.955 ms\n> (11 rows)\n>\n\nIf I disable sequential scans, hash joins, and merge joins, the query plans become the same and performance on the first slow one is much improved. Is there something else I can do to avoid this problem?\nbelow also at https://gist.github.com/joevandyk/34e31b3ad5cccb730a50/raw/8081a4298ba50ac93a86df97c1d0aae482ee7d2d/gistfile1.txt\n Aggregate (cost=869360.53..869360.54 rows=1 width=0) (actual time=103.102..103.102 rows=1 loops=1) -> Nested Loop (cost=0.00..869164.63 rows=78360 width=0) (actual time=0.253..101.708 rows=8413 loops=1)\n -> Nested Loop (cost=0.00..438422.95 rows=56499 width=4) (actual time=0.157..51.766 rows=8178 loops=1) -> Index Scan using index_products_on_drop_shipper_id on products (cost=0.00..2312.56 rows=618 width=4) (actual time=0.087..6.318 rows=618 loops=1)\n Index Cond: (drop_shipper_id = 221) -> Index Scan using index_line_items_on_product_id on line_items li (cost=0.00..702.89 rows=279 width=8) (actual time=0.010..0.069 rows=13 loops=618)\n Index Cond: (product_id = products.id) -> Index Only Scan using purchased_items_line_item_id_idx on purchased_items pi (cost=0.00..7.60 rows=2 width=4) (actual time=0.005..0.005 rows=1 loops=8178)\n Index Cond: (line_item_id = li.id) Heap Fetches: 144 Total runtime: 103.442 ms(11 rows)\n\nOn Fri, Apr 5, 2013 at 6:38 PM, Joe Van Dyk <[email protected]> wrote:\nOn 9.2.4, running two identical queries except for the value of a column in the WHERE clause. Postgres is picking very different query plans, the first is much slower than the second.Any ideas on how I can speed this up? 
I have btree indexes for all the columns used in the query.\nexplain analyze SELECT COUNT(*) FROM purchased_items pi \r\n\r\n\r\ninner join line_items li on li.id = pi.line_item_id inner join products on products.id = li.product_id \r\n\r\n\r\nWHERE products.drop_shipper_id = 221; Aggregate (cost=193356.31..193356.32 rows=1 width=0) (actual time=2425.225..2425.225 rows=1 loops=1) -> Hash Join (cost=78864.43..193160.41 rows=78360 width=0) (actual time=726.612..2424.206 rows=8413 loops=1)\r\n\r\n\r\n Hash Cond: (pi.line_item_id = li.id) -> Seq Scan on purchased_items pi (cost=0.00..60912.39 rows=3724639 width=4) (actual time=0.008..616.812 rows=3724639 loops=1)\r\n\r\n -> Hash (cost=77937.19..77937.19 rows=56499 width=4) (actual time=726.231..726.231 rows=8178 loops=1)\r\n Buckets: 4096 Batches: 4 Memory Usage: 73kB -> Hash Join (cost=1684.33..77937.19 rows=56499 width=4) (actual time=1.270..723.222 rows=8178 loops=1) Hash Cond: (li.product_id = products.id)\r\n\r\n\r\n -> Seq Scan on line_items li (cost=0.00..65617.18 rows=2685518 width=8) (actual time=0.081..392.926 rows=2685499 loops=1) -> Hash (cost=1676.60..1676.60 rows=618 width=4) (actual time=0.835..0.835 rows=618 loops=1)\r\n\r\n\r\n Buckets: 1024 Batches: 1 Memory Usage: 22kB -> Bitmap Heap Scan on products (cost=13.07..1676.60 rows=618 width=4) (actual time=0.185..0.752 rows=618 loops=1)\r\n\r\n\r\n Recheck Cond: (drop_shipper_id = 221) -> Bitmap Index Scan on index_products_on_drop_shipper_id (cost=0.00..12.92 rows=618 width=0) (actual time=0.125..0.125 rows=618 loops=1)\r\n\r\n\r\n Index Cond: (drop_shipper_id = 221) Total runtime: 2425.302 msexplain analyze \r\n\r\n\r\nSELECT COUNT(*) FROM purchased_items pi inner join line_items li on li.id = pi.line_item_id \r\n\r\n\r\ninner join products on products.id = li.product_id WHERE products.drop_shipper_id = 2; \r\n\r\n\r\n Aggregate (cost=29260.40..29260.41 rows=1 width=0) (actual time=0.906..0.906 rows=1 loops=1)\r\n\r\n\r\n -> Nested Loop (cost=0.00..29254.38 rows=2409 width=0) (actual time=0.029..0.877 rows=172 loops=1) -> Nested Loop (cost=0.00..16011.70 rows=1737 width=4) (actual time=0.021..0.383 rows=167 loops=1)\r\n\r\n\r\n -> Index Scan using index_products_on_drop_shipper_id on products (cost=0.00..80.41 rows=19 width=4) (actual time=0.010..0.074 rows=70 loops=1) Index Cond: (drop_shipper_id = 2)\r\n\r\n\r\n -> Index Scan using index_line_items_on_product_id on line_items li (cost=0.00..835.70 rows=279 width=8) (actual time=0.002..0.004 rows=2 loops=70) Index Cond: (product_id = products.id)\r\n\r\n\r\n -> Index Only Scan using purchased_items_line_item_id_idx on purchased_items pi (cost=0.00..7.60 rows=2 width=4) (actual time=0.002..0.003 rows=1 loops=167) Index Cond: (line_item_id = li.id)\r\n\r\n\r\n Heap Fetches: 5 Total runtime: 0.955 ms(11 rows)",
"msg_date": "Fri, 5 Apr 2013 18:50:30 -0700",
"msg_from": "Joe Van Dyk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow joins?"
},
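A minimal sketch of the session-level experiment Joe describes above. The enable_* names are standard PostgreSQL planner settings, but the exact statements he ran are an assumption; the query is the one from the thread, and these switches are diagnostic aids rather than a production fix:

  SET enable_seqscan   = off;
  SET enable_hashjoin  = off;
  SET enable_mergejoin = off;

  EXPLAIN ANALYZE
  SELECT COUNT(*)
  FROM purchased_items pi
  inner join line_items li on li.id = pi.line_item_id
  inner join products on products.id = li.product_id
  WHERE products.drop_shipper_id = 221;

  -- restore the defaults afterwards
  RESET enable_seqscan;
  RESET enable_hashjoin;
  RESET enable_mergejoin;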
{
"msg_contents": "Joe --\n\n>________________________________\n> From: Joe Van Dyk <[email protected]>\n>To: [email protected] \n>Sent: Friday, April 5, 2013 6:42 PM\n>Subject: Re: [PERFORM] slow joins?\n> \n>\n>(https://gist.github.com/joevandyk/df0df703f3fda6d14ae1/raw/c15cae813913b7f8c35b24b467a0c732c0100d79/gistfile1.txt shows a non-wrapped version of the queries and plan)\n>\n>\n>\n>\n>On Fri, Apr 5, 2013 at 6:38 PM, Joe Van Dyk <[email protected]> wrote:\n>\n>On 9.2.4, running two identical queries except for the value of a column in the WHERE clause. Postgres is picking very different query plans, the first is much slower than the second.\n>>\n>>\n>>Any ideas on how I can speed this up? I have btree indexes for all the columns used in the query.\n>>\n>>explain analyze \n>>SELECT COUNT(*) \n>>FROM purchased_items pi \n>>inner join line_items li on li.id = pi.line_item_id \n>>inner join products on products.id = li.product_id \n>>WHERE products.drop_shipper_id = 221;\n>>\n>> Aggregate (cost=193356.31..193356.32 rows=1 width=0) (actual time=2425.225..2425.225 rows=1 loops=1)\n>> -> Hash Join (cost=78864.43..193160.41 rows=78360 width=0) (actual time=726.612..2424.206 rows=8413 loops=1)\n>> Hash Cond: (pi.line_item_id = li.id)\n>> -> Seq Scan on purchased_items pi (cost=0.00..60912.39 rows=3724639 width=4) (actual time=0.008..616.812 rows=3724639 loops=1)\n>> -> Hash (cost=77937.19..77937.19 rows=56499 width=4) (actual time=726.231..726.231 rows=8178 loops=1)\n>> Buckets: 4096 Batches: 4 Memory Usage: 73kB\n>> -> Hash Join (cost=1684.33..77937.19 rows=56499 width=4) (actual time=1.270..723.222 rows=8178 loops=1)\n>> Hash Cond: (li.product_id = products.id)\n>> -> Seq Scan on line_items li (cost=0.00..65617.18 rows=2685518 width=8) (actual time=0.081..392.926 rows=2685499 loops=1)\n>> -> Hash (cost=1676.60..1676.60 rows=618 width=4) (actual time=0.835..0.835 rows=618 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 22kB\n>> -> Bitmap Heap Scan on products (cost=13.07..1676.60 rows=618 width=4) (actual time=0.185..0.752 rows=618 loops=1)\n>> Recheck Cond: (drop_shipper_id = 221)\n>> -> Bitmap Index Scan on index_products_on_drop_shipper_id (cost=0.00..12.92 rows=618 width=0) (actual time=0.125..0.125 rows=618 loops=1)\n>> Index Cond: (drop_shipper_id = 221)\n>> Total runtime: 2425.302 ms\n>>\n>>\n>>explain analyze \n>>SELECT COUNT(*) \n>>FROM purchased_items pi \n>>inner join line_items li on li.id = pi.line_item_id \n>>inner join products on products.id = li.product_id \n>>WHERE products.drop_shipper_id = 2; \n>> \n>>\n>> Aggregate (cost=29260.40..29260.41 rows=1 width=0) (actual time=0.906..0.906 rows=1 loops=1)\n>> -> Nested Loop (cost=0.00..29254.38 rows=2409 width=0) (actual time=0.029..0.877 rows=172 loops=1)\n>> -> Nested Loop (cost=0.00..16011.70 rows=1737 width=4) (actual time=0.021..0.383 rows=167 loops=1)\n>> -> Index Scan using index_products_on_drop_shipper_id on products (cost=0.00..80.41 rows=19 width=4) (actual time=0.010..0.074 rows=70 loops=1)\n>> Index Cond: (drop_shipper_id = 2)\n>> -> Index Scan using index_line_items_on_product_id on line_items li (cost=0.00..835.70 rows=279 width=8) (actual time=0.002..0.004 rows=2 loops=70)\n>> Index Cond: (product_id = products.id)\n>> -> Index Only Scan using purchased_items_line_item_id_idx on purchased_items pi (cost=0.00..7.60 rows=2 width=4) (actual time=0.002..0.003 rows=1 loops=167)\n>> Index Cond: (line_item_id = li.id)\n>> Heap Fetches: 5\n>> Total runtime: 0.955 ms\n>>(11 rows)\n>>\n>\n\n\nDoes drop_shipper+id have a 
much larger number of rows which is making the scanner want to avoid an indexed scan or otherwise prefer a sequential scan on products and on line_items ?\n\nWhat are the stats settings for these tables ?\n\nHTH,\n\nGreg WIlliamson\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 5 Apr 2013 18:54:10 -0700 (PDT)",
"msg_from": "Greg Williamson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow joins?"
},
{
"msg_contents": "On Fri, Apr 5, 2013 at 6:54 PM, Greg Williamson <[email protected]>wrote:\n\n> Joe --\n>\n> >________________________________\n> > From: Joe Van Dyk <[email protected]>\n> >To: [email protected]\n> >Sent: Friday, April 5, 2013 6:42 PM\n> >Subject: Re: [PERFORM] slow joins?\n> >\n> >\n> >(\n> https://gist.github.com/joevandyk/df0df703f3fda6d14ae1/raw/c15cae813913b7f8c35b24b467a0c732c0100d79/gistfile1.txtshows a non-wrapped version of the queries and plan)\n> >\n> >\n> >\n> >\n> >On Fri, Apr 5, 2013 at 6:38 PM, Joe Van Dyk <[email protected]> wrote:\n> >\n> >On 9.2.4, running two identical queries except for the value of a column\n> in the WHERE clause. Postgres is picking very different query plans, the\n> first is much slower than the second.\n> >>\n> >>\n> >>Any ideas on how I can speed this up? I have btree indexes for all the\n> columns used in the query.\n> >>\n> >>explain analyze\n>\n> >>SELECT COUNT(*)\n>\n> >>FROM purchased_items pi\n>\n> >>inner join line_items li on li.id = pi.line_item_id\n>\n> >>inner join products on products.id = li.product_id\n>\n> >>WHERE products.drop_shipper_id = 221;\n> >>\n> >> Aggregate (cost=193356.31..193356.32 rows=1 width=0) (actual\n> time=2425.225..2425.225 rows=1 loops=1)\n> >> -> Hash Join (cost=78864.43..193160.41 rows=78360 width=0) (actual\n> time=726.612..2424.206 rows=8413 loops=1)\n> >> Hash Cond: (pi.line_item_id = li.id)\n> >> -> Seq Scan on purchased_items pi (cost=0.00..60912.39\n> rows=3724639 width=4) (actual time=0.008..616.812 rows=3724639 loops=1)\n> >> -> Hash (cost=77937.19..77937.19 rows=56499 width=4) (actual\n> time=726.231..726.231 rows=8178 loops=1)\n> >> Buckets: 4096 Batches: 4 Memory Usage: 73kB\n> >> -> Hash Join (cost=1684.33..77937.19 rows=56499\n> width=4) (actual time=1.270..723.222 rows=8178 loops=1)\n> >> Hash Cond: (li.product_id = products.id)\n> >> -> Seq Scan on line_items li (cost=0.00..65617.18\n> rows=2685518 width=8) (actual time=0.081..392.926 rows=2685499 loops=1)\n> >> -> Hash (cost=1676.60..1676.60 rows=618 width=4)\n> (actual time=0.835..0.835 rows=618 loops=1)\n> >> Buckets: 1024 Batches: 1 Memory Usage: 22kB\n> >> -> Bitmap Heap Scan on products\n> (cost=13.07..1676.60 rows=618 width=4) (actual time=0.185..0.752 rows=618\n> loops=1)\n> >> Recheck Cond: (drop_shipper_id = 221)\n> >> -> Bitmap Index Scan on\n> index_products_on_drop_shipper_id (cost=0.00..12.92 rows=618 width=0)\n> (actual time=0.125..0.125 rows=618 loops=1)\n> >> Index Cond: (drop_shipper_id =\n> 221)\n> >> Total runtime: 2425.302 ms\n> >>\n> >>\n> >>explain analyze\n>\n> >>SELECT COUNT(*)\n>\n> >>FROM purchased_items pi\n>\n> >>inner join line_items li on li.id = pi.line_item_id\n>\n> >>inner join products on products.id = li.product_id\n>\n> >>WHERE products.drop_shipper_id = 2;\n>\n> >>\n>\n> >>\n> >> Aggregate (cost=29260.40..29260.41 rows=1 width=0) (actual\n> time=0.906..0.906 rows=1 loops=1)\n> >> -> Nested Loop (cost=0.00..29254.38 rows=2409 width=0) (actual\n> time=0.029..0.877 rows=172 loops=1)\n> >> -> Nested Loop (cost=0.00..16011.70 rows=1737 width=4)\n> (actual time=0.021..0.383 rows=167 loops=1)\n> >> -> Index Scan using index_products_on_drop_shipper_id on\n> products (cost=0.00..80.41 rows=19 width=4) (actual time=0.010..0.074\n> rows=70 loops=1)\n> >> Index Cond: (drop_shipper_id = 2)\n> >> -> Index Scan using index_line_items_on_product_id on\n> line_items li (cost=0.00..835.70 rows=279 width=8) (actual\n> time=0.002..0.004 rows=2 loops=70)\n> >> Index Cond: (product_id = products.id)\n> >> -> 
Index Only Scan using purchased_items_line_item_id_idx on\n> purchased_items pi (cost=0.00..7.60 rows=2 width=4) (actual\n> time=0.002..0.003 rows=1 loops=167)\n> >> Index Cond: (line_item_id = li.id)\n> >> Heap Fetches: 5\n> >> Total runtime: 0.955 ms\n> >>(11 rows)\n> >>\n> >\n>\n>\n> Does drop_shipper+id have a much larger number of rows which is making the\n> scanner want to avoid an indexed scan or otherwise prefer a sequential scan\n> on products and on line_items ?\n>\n\nAssuming you mean products.drop_shipper_id? There are more rows matched for\nthe first one vs the second one.\n70 products rows match drop_shipper_id=2, 618 match drop_shipper_id=221.\n\n\n> What are the stats settings for these tables ?\n>\n\nWhatever the defaults are.\n\n\n>\n> HTH,\n>\n> Greg WIlliamson\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Fri, Apr 5, 2013 at 6:54 PM, Greg Williamson <[email protected]> wrote:\nJoe --\n\n>________________________________\n> From: Joe Van Dyk <[email protected]>\n>To: [email protected]\n>Sent: Friday, April 5, 2013 6:42 PM\n>Subject: Re: [PERFORM] slow joins?\n>\n>\n>(https://gist.github.com/joevandyk/df0df703f3fda6d14ae1/raw/c15cae813913b7f8c35b24b467a0c732c0100d79/gistfile1.txt shows a non-wrapped version of the queries and plan)\n\n\n>\n>\n>\n>\n>On Fri, Apr 5, 2013 at 6:38 PM, Joe Van Dyk <[email protected]> wrote:\n>\n>On 9.2.4, running two identical queries except for the value of a column in the WHERE clause. Postgres is picking very different query plans, the first is much slower than the second.\n>>\n>>\n>>Any ideas on how I can speed this up? I have btree indexes for all the columns used in the query.\n>>\n>>explain analyze \n>>SELECT COUNT(*) \n>>FROM purchased_items pi \n>>inner join line_items li on li.id = pi.line_item_id \n>>inner join products on products.id = li.product_id \n>>WHERE products.drop_shipper_id = 221;\n>>\n>> Aggregate (cost=193356.31..193356.32 rows=1 width=0) (actual time=2425.225..2425.225 rows=1 loops=1)\n>> -> Hash Join (cost=78864.43..193160.41 rows=78360 width=0) (actual time=726.612..2424.206 rows=8413 loops=1)\n>> Hash Cond: (pi.line_item_id = li.id)\n>> -> Seq Scan on purchased_items pi (cost=0.00..60912.39 rows=3724639 width=4) (actual time=0.008..616.812 rows=3724639 loops=1)\n>> -> Hash (cost=77937.19..77937.19 rows=56499 width=4) (actual time=726.231..726.231 rows=8178 loops=1)\n>> Buckets: 4096 Batches: 4 Memory Usage: 73kB\n>> -> Hash Join (cost=1684.33..77937.19 rows=56499 width=4) (actual time=1.270..723.222 rows=8178 loops=1)\n>> Hash Cond: (li.product_id = products.id)\n>> -> Seq Scan on line_items li (cost=0.00..65617.18 rows=2685518 width=8) (actual time=0.081..392.926 rows=2685499 loops=1)\n>> -> Hash (cost=1676.60..1676.60 rows=618 width=4) (actual time=0.835..0.835 rows=618 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 22kB\n>> -> Bitmap Heap Scan on products (cost=13.07..1676.60 rows=618 width=4) (actual time=0.185..0.752 rows=618 loops=1)\n>> Recheck Cond: (drop_shipper_id = 221)\n>> -> Bitmap Index Scan on index_products_on_drop_shipper_id (cost=0.00..12.92 rows=618 width=0) (actual time=0.125..0.125 rows=618 loops=1)\n>> Index Cond: (drop_shipper_id = 221)\n>> Total runtime: 2425.302 ms\n>>\n>>\n>>explain analyze \n>>SELECT COUNT(*) \n>>FROM purchased_items pi \n>>inner join line_items li on li.id = pi.line_item_id \n>>inner join products on products.id = li.product_id 
\n>>WHERE products.drop_shipper_id = 2; \n>> \n>>\n>> Aggregate (cost=29260.40..29260.41 rows=1 width=0) (actual time=0.906..0.906 rows=1 loops=1)\n>> -> Nested Loop (cost=0.00..29254.38 rows=2409 width=0) (actual time=0.029..0.877 rows=172 loops=1)\n>> -> Nested Loop (cost=0.00..16011.70 rows=1737 width=4) (actual time=0.021..0.383 rows=167 loops=1)\n>> -> Index Scan using index_products_on_drop_shipper_id on products (cost=0.00..80.41 rows=19 width=4) (actual time=0.010..0.074 rows=70 loops=1)\n>> Index Cond: (drop_shipper_id = 2)\n>> -> Index Scan using index_line_items_on_product_id on line_items li (cost=0.00..835.70 rows=279 width=8) (actual time=0.002..0.004 rows=2 loops=70)\n>> Index Cond: (product_id = products.id)\n>> -> Index Only Scan using purchased_items_line_item_id_idx on purchased_items pi (cost=0.00..7.60 rows=2 width=4) (actual time=0.002..0.003 rows=1 loops=167)\n>> Index Cond: (line_item_id = li.id)\n>> Heap Fetches: 5\n>> Total runtime: 0.955 ms\n>>(11 rows)\n>>\n>\n\n\nDoes drop_shipper+id have a much larger number of rows which is making the scanner want to avoid an indexed scan or otherwise prefer a sequential scan on products and on line_items ?\nAssuming you mean products.drop_shipper_id? There are more rows matched for the first one vs the second one. 70 products rows match drop_shipper_id=2, 618 match drop_shipper_id=221.\n \nWhat are the stats settings for these tables ?Whatever the defaults are. \n\nHTH,\n\nGreg WIlliamson\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 5 Apr 2013 19:56:02 -0700",
"msg_from": "Joe Van Dyk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow joins?"
},
{
"msg_contents": "\n\nJoe --\n\n>________________________________\n> From: Joe Van Dyk <[email protected]>\n>To: Greg Williamson <[email protected]> \n>Cc: \"[email protected]\" <[email protected]> \n>Sent: Friday, April 5, 2013 7:56 PM\n>Subject: Re: [PERFORM] slow joins?\n> \n>\n>On Fri, Apr 5, 2013 at 6:54 PM, Greg Williamson <[email protected]> wrote:\n>\n>Joe --\n>>\n>>>________________________________\n>>> From: Joe Van Dyk <[email protected]>\n>>>To: [email protected]\n>>>Sent: Friday, April 5, 2013 6:42 PM\n>>>Subject: Re: [PERFORM] slow joins?\n>>\n>>>\n>>>\n>>>(https://gist.github.com/joevandyk/df0df703f3fda6d14ae1/raw/c15cae813913b7f8c35b24b467a0c732c0100d79/gistfile1.txt shows a non-wrapped version of the queries and plan)\n>>>\n>>>\n>>>\n>>>\n>>>On Fri, Apr 5, 2013 at 6:38 PM, Joe Van Dyk <[email protected]> wrote:\n>>>\n>>>On 9.2.4, running two identical queries except for the value of a column in the WHERE clause. Postgres is picking very different query plans, the first is much slower than the second.\n>>>>\n>>>>\n>>>>Any ideas on how I can speed this up? I have btree indexes for all the columns used in the query.\n>>>>\n>>>>explain analyze \n>>>>SELECT COUNT(*) \n>>>>FROM purchased_items pi \n>>>>inner join line_items li on li.id = pi.line_item_id \n>>>>inner join products on products.id = li.product_id \n>>>>WHERE products.drop_shipper_id = 221;\n>>>>\n>>>> Aggregate (cost=193356.31..193356.32 rows=1 width=0) (actual time=2425.225..2425.225 rows=1 loops=1)\n>>>> -> Hash Join (cost=78864.43..193160.41 rows=78360 width=0) (actual time=726.612..2424.206 rows=8413 loops=1)\n>>>> Hash Cond: (pi.line_item_id = li.id)\n>>>> -> Seq Scan on purchased_items pi (cost=0.00..60912.39 rows=3724639 width=4) (actual time=0.008..616.812 rows=3724639 loops=1)\n>>>> -> Hash (cost=77937.19..77937.19 rows=56499 width=4) (actual time=726.231..726.231 rows=8178 loops=1)\n>>>> Buckets: 4096 Batches: 4 Memory Usage: 73kB\n>>>> -> Hash Join (cost=1684.33..77937.19 rows=56499 width=4) (actual time=1.270..723.222 rows=8178 loops=1)\n>>>> Hash Cond: (li.product_id = products.id)\n>>>> -> Seq Scan on line_items li (cost=0.00..65617.18 rows=2685518 width=8) (actual time=0.081..392.926 rows=2685499 loops=1)\n>>>> -> Hash (cost=1676.60..1676.60 rows=618 width=4) (actual time=0.835..0.835 rows=618 loops=1)\n>>>> Buckets: 1024 Batches: 1 Memory Usage: 22kB\n>>>> -> Bitmap Heap Scan on products (cost=13.07..1676.60 rows=618 width=4) (actual time=0.185..0.752 rows=618 loops=1)\n>>>> Recheck Cond: (drop_shipper_id = 221)\n>>>> -> Bitmap Index Scan on index_products_on_drop_shipper_id (cost=0.00..12.92 rows=618 width=0) (actual time=0.125..0.125 rows=618 loops=1)\n>>>> Index Cond: (drop_shipper_id = 221)\n>>>> Total runtime: 2425.302 ms\n>>>>\n>>>>\n>>>>explain analyze \n>>>>SELECT COUNT(*) \n>>>>FROM purchased_items pi \n>>>>inner join line_items li on li.id = pi.line_item_id \n>>>>inner join products on products.id = li.product_id \n>>>>WHERE products.drop_shipper_id = 2; \n>>>> \n>>>>\n>>>> Aggregate (cost=29260.40..29260.41 rows=1 width=0) (actual time=0.906..0.906 rows=1 loops=1)\n>>>> -> Nested Loop (cost=0.00..29254.38 rows=2409 width=0) (actual time=0.029..0.877 rows=172 loops=1)\n>>>> -> Nested Loop (cost=0.00..16011.70 rows=1737 width=4) (actual time=0.021..0.383 rows=167 loops=1)\n>>>> -> Index Scan using index_products_on_drop_shipper_id on products (cost=0.00..80.41 rows=19 width=4) (actual time=0.010..0.074 rows=70 loops=1)\n>>>> Index Cond: (drop_shipper_id = 2)\n>>>> -> Index Scan using 
index_line_items_on_product_id on line_items li (cost=0.00..835.70 rows=279 width=8) (actual time=0.002..0.004 rows=2 loops=70)\n>>>> Index Cond: (product_id = products.id)\n>>>> -> Index Only Scan using purchased_items_line_item_id_idx on purchased_items pi (cost=0.00..7.60 rows=2 width=4) (actual time=0.002..0.003 rows=1 loops=167)\n>>>> Index Cond: (line_item_id = li.id)\n>>>> Heap Fetches: 5\n>>>> Total runtime: 0.955 ms\n>>>>(11 rows)\n>>>>\n>>>\n>>\n>>\n>>Does drop_shipper+id have a much larger number of rows which is making the scanner want to avoid an indexed scan or otherwise prefer a sequential scan on products and on line_items ?\n>>\n>\n>\n>Assuming you mean products.drop_shipper_id? There are more rows matched for the first one vs the second one. \n>70 products rows match drop_shipper_id=2, 618 match drop_shipper_id=221.\n> \n>What are the stats settings for these tables ?\n>>\n>\n>\n>Whatever the defaults are.\n> \n\nI mis-pasted the tables -- both line_items and purchased_items are getting sequential scans for the relevant rows; it is possible that there's enough difference to tip the planner to use sequential scans.\n\nYou might try increasing the stats being collected on those two tables, run analyze on all the tables in the query, and try it again.\n\nGW\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 5 Apr 2013 20:32:56 -0700 (PDT)",
"msg_from": "Greg Williamson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow joins?"
},
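A hedged sketch of the statistics increase Greg suggests below; the target value 1000 is illustrative only, and the column names are taken from the join conditions in the thread:

  ALTER TABLE line_items      ALTER COLUMN product_id   SET STATISTICS 1000;
  ALTER TABLE purchased_items ALTER COLUMN line_item_id SET STATISTICS 1000;
  ANALYZE products;
  ANALYZE line_items;
  ANALYZE purchased_items;

A higher per-column target makes ANALYZE sample more rows, which can improve the row-count estimates the planner got wrong here, at the cost of slightly longer ANALYZE runs.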
{
"msg_contents": "try to increase cpu_tuple_cost to 0.1\n\nOn 04/06/2013 03:50, Joe Van Dyk wrote:\n> If I disable sequential scans, hash joins, and merge joins, the query \n> plans become the same and performance on the first slow one is much \n> improved.\n>\n> Is there something else I can do to avoid this problem?\n>\n> below also at \n> https://gist.github.com/joevandyk/34e31b3ad5cccb730a50/raw/8081a4298ba50ac93a86df97c1d0aae482ee7d2d/gistfile1.txt\n>\n> Aggregate (cost=869360.53..869360.54 rows=1 width=0) (actual \n> time=103.102..103.102 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..869164.63 rows=78360 width=0) (actual \n> time=0.253..101.708 rows=8413 loops=1)\n> -> Nested Loop (cost=0.00..438422.95 rows=56499 width=4) \n> (actual time=0.157..51.766 rows=8178 loops=1)\n> -> Index Scan using index_products_on_drop_shipper_id \n> on products (cost=0.00..2312.56 rows=618 width=4) (actual \n> time=0.087..6.318 rows=618 loops=1)\n> Index Cond: (drop_shipper_id = 221)\n> -> Index Scan using index_line_items_on_product_id on \n> line_items li (cost=0.00..702.89 rows=279 width=8) (actual \n> time=0.010..0.069 rows=13 loops=618)\n> Index Cond: (product_id = products.id \n> <http://products.id>)\n> -> Index Only Scan using purchased_items_line_item_id_idx on \n> purchased_items pi (cost=0.00..7.60 rows=2 width=4) (actual \n> time=0.005..0.005 rows=1 loops=8178)\n> Index Cond: (line_item_id = li.id <http://li.id>)\n> Heap Fetches: 144\n> Total runtime: 103.442 ms\n> (11 rows)\n>\n>\n>\n> On Fri, Apr 5, 2013 at 6:38 PM, Joe Van Dyk <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> On 9.2.4, running two identical queries except for the value of a\n> column in the WHERE clause. Postgres is picking very different\n> query plans, the first is much slower than the second.\n>\n> Any ideas on how I can speed this up? 
I have btree indexes for\n> all the columns used in the query.\n>\n> explain analyze\n> SELECT COUNT(*)\n> FROM purchased_items pi\n> inner join line_items li on li.id <http://li.id> = pi.line_item_id\n> inner join products on products.id <http://products.id> =\n> li.product_id\n> WHERE products.drop_shipper_id = 221;\n>\n> Aggregate (cost=193356.31..193356.32 rows=1 width=0) (actual\n> time=2425.225..2425.225 rows=1 loops=1)\n> -> Hash Join (cost=78864.43..193160.41 rows=78360 width=0)\n> (actual time=726.612..2424.206 rows=8413 loops=1)\n> Hash Cond: (pi.line_item_id = li.id <http://li.id>)\n> -> Seq Scan on purchased_items pi (cost=0.00..60912.39\n> rows=3724639 width=4) (actual time=0.008..616.812 rows=3724639\n> loops=1)\n> -> Hash (cost=77937.19..77937.19 rows=56499 width=4)\n> (actual time=726.231..726.231 rows=8178 loops=1)\n> Buckets: 4096 Batches: 4 Memory Usage: 73kB\n> -> Hash Join (cost=1684.33..77937.19 rows=56499\n> width=4) (actual time=1.270..723.222 rows=8178 loops=1)\n> Hash Cond: (li.product_id = products.id\n> <http://products.id>)\n> -> Seq Scan on line_items li\n> (cost=0.00..65617.18 rows=2685518 width=8) (actual\n> time=0.081..392.926 rows=2685499 loops=1)\n> -> Hash (cost=1676.60..1676.60 rows=618\n> width=4) (actual time=0.835..0.835 rows=618 loops=1)\n> Buckets: 1024 Batches: 1 Memory\n> Usage: 22kB\n> -> Bitmap Heap Scan on products\n> (cost=13.07..1676.60 rows=618 width=4) (actual time=0.185..0.752\n> rows=618 loops=1)\n> Recheck Cond: (drop_shipper_id = 221)\n> -> Bitmap Index Scan on\n> index_products_on_drop_shipper_id (cost=0.00..12.92 rows=618\n> width=0) (actual time=0.125..0.125 rows=618 loops=1)\n> Index Cond:\n> (drop_shipper_id = 221)\n> Total runtime: 2425.302 ms\n>\n>\n> explain analyze\n> SELECT COUNT(*)\n> FROM purchased_items pi\n> inner join line_items li on li.id <http://li.id> = pi.line_item_id\n> inner join products on products.id <http://products.id> =\n> li.product_id\n> WHERE products.drop_shipper_id = 2;\n>\n>\n> Aggregate (cost=29260.40..29260.41 rows=1 width=0) (actual\n> time=0.906..0.906 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..29254.38 rows=2409 width=0)\n> (actual time=0.029..0.877 rows=172 loops=1)\n> -> Nested Loop (cost=0.00..16011.70 rows=1737 width=4)\n> (actual time=0.021..0.383 rows=167 loops=1)\n> -> Index Scan using\n> index_products_on_drop_shipper_id on products (cost=0.00..80.41\n> rows=19 width=4) (actual time=0.010..0.074 rows=70 loops=1)\n> Index Cond: (drop_shipper_id = 2)\n> -> Index Scan using index_line_items_on_product_id\n> on line_items li (cost=0.00..835.70 rows=279 width=8) (actual\n> time=0.002..0.004 rows=2 loops=70)\n> Index Cond: (product_id = products.id\n> <http://products.id>)\n> -> Index Only Scan using\n> purchased_items_line_item_id_idx on purchased_items pi\n> (cost=0.00..7.60 rows=2 width=4) (actual time=0.002..0.003 rows=1\n> loops=167)\n> Index Cond: (line_item_id = li.id <http://li.id>)\n> Heap Fetches: 5\n> Total runtime: 0.955 ms\n> (11 rows)\n>\n>\n\n\n\n\n\n\n\ntry to increase cpu_tuple_cost to 0.1\n\n On 04/06/2013 03:50, Joe Van Dyk wrote:\n\n\nIf I disable sequential scans, hash joins, and\n merge joins, the query plans become the same and performance on\n the first slow one is much improved. 
\n \n\nIs there something else I can do to avoid this problem?\n\n\nbelow also at https://gist.github.com/joevandyk/34e31b3ad5cccb730a50/raw/8081a4298ba50ac93a86df97c1d0aae482ee7d2d/gistfile1.txt\n\n\n\n Aggregate (cost=869360.53..869360.54 rows=1 width=0)\n (actual time=103.102..103.102 rows=1 loops=1)\n -> Nested Loop (cost=0.00..869164.63 rows=78360\n width=0) (actual time=0.253..101.708 rows=8413 loops=1)\n -> Nested Loop (cost=0.00..438422.95\n rows=56499 width=4) (actual time=0.157..51.766 rows=8178\n loops=1)\n -> Index Scan using\n index_products_on_drop_shipper_id on products\n (cost=0.00..2312.56 rows=618 width=4) (actual\n time=0.087..6.318 rows=618 loops=1)\n Index Cond: (drop_shipper_id =\n 221)\n -> Index Scan using\n index_line_items_on_product_id on line_items li\n (cost=0.00..702.89 rows=279 width=8) (actual\n time=0.010..0.069 rows=13 loops=618)\n Index Cond: (product_id = products.id)\n -> Index Only Scan using\n purchased_items_line_item_id_idx on purchased_items pi\n (cost=0.00..7.60 rows=2 width=4) (actual\n time=0.005..0.005 rows=1 loops=8178)\n Index Cond: (line_item_id = li.id)\n Heap Fetches: 144\n Total runtime: 103.442 ms\n(11 rows)\n\n\n\n\n\n\n\nOn Fri, Apr 5, 2013 at 6:38 PM, Joe Van\n Dyk <[email protected]>\n wrote:\n\nOn 9.2.4, running two identical queries\n except for the value of a column in the WHERE clause.\n Postgres is picking very different query plans, the first\n is much slower than the second.\n \n\nAny ideas on how I can speed this up? I have btree\n indexes for all the columns used in the query.\n\n explain analyze \n \n SELECT COUNT(*) \n \n FROM purchased_items pi \n \n inner join line_items li on li.id =\n pi.line_item_id \n \n inner join products on products.id\n = li.product_id \n \n WHERE products.drop_shipper_id = 221;\n\n Aggregate (cost=193356.31..193356.32 rows=1 width=0)\n (actual time=2425.225..2425.225 rows=1 loops=1)\n -> Hash Join (cost=78864.43..193160.41\n rows=78360 width=0) (actual time=726.612..2424.206\n rows=8413 loops=1)\n Hash Cond: (pi.line_item_id = li.id)\n -> Seq Scan on purchased_items pi\n (cost=0.00..60912.39 rows=3724639 width=4) (actual\n time=0.008..616.812 rows=3724639 loops=1)\n -> Hash (cost=77937.19..77937.19\n rows=56499 width=4) (actual time=726.231..726.231\n rows=8178 loops=1)\n Buckets: 4096 Batches: 4 Memory Usage:\n 73kB\n -> Hash Join (cost=1684.33..77937.19\n rows=56499 width=4) (actual time=1.270..723.222\n rows=8178 loops=1)\n Hash Cond: (li.product_id = products.id)\n -> Seq Scan on line_items li\n (cost=0.00..65617.18 rows=2685518 width=8) (actual\n time=0.081..392.926 rows=2685499 loops=1)\n -> Hash (cost=1676.60..1676.60\n rows=618 width=4) (actual time=0.835..0.835 rows=618\n loops=1)\n Buckets: 1024 Batches: 1\n Memory Usage: 22kB\n -> Bitmap Heap Scan on\n products (cost=13.07..1676.60 rows=618 width=4) (actual\n time=0.185..0.752 rows=618 loops=1)\n Recheck Cond:\n (drop_shipper_id = 221)\n -> Bitmap Index\n Scan on index_products_on_drop_shipper_id\n (cost=0.00..12.92 rows=618 width=0) (actual\n time=0.125..0.125 rows=618 loops=1)\n Index Cond:\n (drop_shipper_id = 221)\n Total runtime: 2425.302 ms\n\n\n explain analyze \n \n SELECT COUNT(*) \n \n FROM purchased_items pi \n \n inner join line_items li on li.id =\n pi.line_item_id \n \n inner join products on products.id\n = li.product_id \n \n WHERE products.drop_shipper_id = 2; \n \n \n \n \n\n Aggregate (cost=29260.40..29260.41 rows=1 width=0)\n (actual time=0.906..0.906 rows=1 loops=1)\n -> Nested Loop (cost=0.00..29254.38 
rows=2409\n width=0) (actual time=0.029..0.877 rows=172 loops=1)\n -> Nested Loop (cost=0.00..16011.70\n rows=1737 width=4) (actual time=0.021..0.383 rows=167\n loops=1)\n -> Index Scan using\n index_products_on_drop_shipper_id on products\n (cost=0.00..80.41 rows=19 width=4) (actual\n time=0.010..0.074 rows=70 loops=1)\n Index Cond: (drop_shipper_id = 2)\n -> Index Scan using\n index_line_items_on_product_id on line_items li\n (cost=0.00..835.70 rows=279 width=8) (actual\n time=0.002..0.004 rows=2 loops=70)\n Index Cond: (product_id = products.id)\n -> Index Only Scan using\n purchased_items_line_item_id_idx on purchased_items pi\n (cost=0.00..7.60 rows=2 width=4) (actual\n time=0.002..0.003 rows=1 loops=167)\n Index Cond: (line_item_id = li.id)\n Heap Fetches: 5\n Total runtime: 0.955 ms\n (11 rows)",
"msg_date": "Sat, 06 Apr 2013 12:53:37 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow joins?"
},
{
"msg_contents": "Julien Cigar <[email protected]> wrote:\n\n> try to increase cpu_tuple_cost to 0.1\n\nI agree that's on the right track, but possibly an overly blunt\ntool for the job. The following settings are likely to need\nadjustment, IMO:\n\neffective_cache_size: People often set this to somewhere in the\nrange of 50% to 75% of the RAM on the machine. This setting does\nnot allocate RAM, but tells the planner how likely it is to find\nthings in cache for, say, repeated index access. A higher setting\nmakes the random access involved in index scans seem like less of a\nproblem.\n\nrandom_page_cost: You seem to have a very high cache hit ratio,\nbetween shared_buffers and the OS cache. To model this you should\ndecrease random_page_cost to something just above seq_page_cost or\nequal to it. To reflect the relatively low cost of reading a page\nfrom the OS cache (compared to actually reading from disk) you\nmight want to reduce both of these below 1. 0.1 is a not-uncommon\nsetting for instances with the active portion of the database\nwell-cached.\n\ncpu_tuple_cost: I always raise this; I think our default is just\ntoo low to accurately model the cost of reading a row, compared to\nthe cost factors used for other things. In combination with the\nabove changes I've never had to go beyond 0.03 to get a good plan. \nI've pushed it to 0.05 to see if that put me near a tipping point\nfor a bad plan, and saw no ill effects. I've never tried higher\nthan 0.05, so I can't speak to that.\n\nIn any event, your current cost settings aren't accurately modeling\nactual costs in your environment for your workload. You need to\nadjust them.\n\nOne of the estimates was off, so increasing the statistics sample\nsize might help, but I suspect that you need to make adjustments\nlike the above in any event.\n\n-- \nKevin Grittner\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 6 Apr 2013 07:22:29 -0700 (PDT)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow joins?"
},
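One way to try Kevin's suggestions is per session, before editing postgresql.conf; the numbers below are placeholders chosen for illustration, not values recommended in the thread:

  SET effective_cache_size = '6GB';  -- often set to 50-75% of RAM
  SET random_page_cost     = 1.1;    -- just above seq_page_cost for a well-cached database
  SET seq_page_cost        = 1.0;
  SET cpu_tuple_cost       = 0.03;

  EXPLAIN ANALYZE
  SELECT COUNT(*)
  FROM purchased_items pi
  inner join line_items li on li.id = pi.line_item_id
  inner join products on products.id = li.product_id
  WHERE products.drop_shipper_id = 221;

If the plan and runtime improve, the same values can then be made permanent in postgresql.conf.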
{
"msg_contents": "On 04/06/2013 16:22, Kevin Grittner wrote:\n> Julien Cigar <[email protected]> wrote:\n>\n>> try to increase cpu_tuple_cost to 0.1\n> I agree that's on the right track, but possibly an overly blunt\n> tool for the job. The following settings are likely to need\n> adjustment, IMO:\n>\n> effective_cache_size: People often set this to somewhere in the\n> range of 50% to 75% of the RAM on the machine. This setting does\n> not allocate RAM, but tells the planner how likely it is to find\n> things in cache for, say, repeated index access. A higher setting\n> makes the random access involved in index scans seem like less of a\n> problem.\n\nI agree that the very first thing to check is effective_cache_size\n\n> random_page_cost: You seem to have a very high cache hit ratio,\n> between shared_buffers and the OS cache. To model this you should\n> decrease random_page_cost to something just above seq_page_cost or\n> equal to it. To reflect the relatively low cost of reading a page\n> from the OS cache (compared to actually reading from disk) you\n> might want to reduce both of these below 1. 0.1 is a not-uncommon\n> setting for instances with the active portion of the database\n> well-cached.\n\nI would first raise cpu_tuple_cost rather than touch random_page_cost. \nRaising cpu_tuple_cost is\na more \"fine-grained method\" for discouraging seqscans than \nrandom_page_cost is.\n\n\n> cpu_tuple_cost: I always raise this; I think our default is just\n> too low to accurately model the cost of reading a row, compared to\n> the cost factors used for other things. In combination with the\n> above changes I've never had to go beyond 0.03 to get a good plan.\n> I've pushed it to 0.05 to see if that put me near a tipping point\n> for a bad plan, and saw no ill effects. I've never tried higher\n> than 0.05, so I can't speak to that.\n\nYep, default cpu_tuple_cost is just too low ..\n\n> In any event, your current cost settings aren't accurately modeling\n> actual costs in your environment for your workload. You need to\n> adjust them.\n>\n> One of the estimates was off, so increasing the statistics sample\n> size might help, but I suspect that you need to make adjustments\n> like the above in any event.\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 06 Apr 2013 16:43:50 +0200",
"msg_from": "Julien Cigar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow joins?"
}
] |
[
{
"msg_contents": "Hi,\n\nCould someone tell m how to measure postgres memory usage.\nIs there a pg_* view to measure?\n\nThank you\nNikT\n\nHi,Could someone tell m how to measure postgres memory usage.Is there a pg_* view to measure?Thank youNikT",
"msg_date": "Sat, 6 Apr 2013 21:59:16 -0700",
"msg_from": "Nik Tek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Find how much memory is postgres using"
},
{
"msg_contents": "Hi,\nas you know 'memory usage' is smt continuously changes in time and not\ndirectly related to pg also related to your resources , you can set a\nspecific limit if you want.\n\n\n\n2013/4/7 Nik Tek <[email protected]>\n\n> Hi,\n>\n> Could someone tell m how to measure postgres memory usage.\n> Is there a pg_* view to measure?\n>\n> Thank you\n> NikT\n>\n\nHi,as you know 'memory usage' is smt continuously changes in time and not directly related to pg also related to your resources , you can set a specific limit if you want.\n2013/4/7 Nik Tek <[email protected]>\nHi,Could someone tell m how to measure postgres memory usage.Is there a pg_* view to measure?Thank youNikT",
"msg_date": "Sun, 7 Apr 2013 10:15:39 +0300",
"msg_from": "=?ISO-8859-1?Q?Yetkin_=D6zt=FCrk?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Find how much memory is postgres using"
},
{
"msg_contents": "On Sat, Apr 06, 2013 at 09:59:16PM -0700, Nik Tek wrote:\n> Could someone tell m how to measure postgres memory usage.\n> Is there a pg_* view to measure?\n\nhttp://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/\n\nBest regards,\n\ndepesz\n\n-- \nThe best thing about modern society is how easy it is to avoid contact with it.\n http://depesz.com/\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin\n",
"msg_date": "Sun, 7 Apr 2013 11:49:37 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Find how much memory is postgres using"
},
{
"msg_contents": "Nik Tek wrote:\n> Could someone tell m how to measure postgres memory usage.\n> Is there a pg_* view to measure?\n\nNo, you have to use operating system tools, like \"ps\" on UNIX.\n\nYours,\nLaurenz Albe\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin\n",
"msg_date": "Mon, 8 Apr 2013 07:13:21 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Find how much memory is postgres using"
},
{
"msg_contents": "On Sun, Apr 07, 2013 at 09:27:42PM -0700, Nik Tek wrote:\n> Thank you Depesz!\n> But I have a naive question, why isn't a straight forword approach for\n> postgres, unlike Oracle or MSSQL?\n\nNo idea. And how do you get memory usage in Oracle or MSSQL?\n\nBest regards,\n\ndepesz\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 8 Apr 2013 12:18:03 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Find how much memory is postgres using"
},
{
"msg_contents": "--For MSSQL\nselect\n(select cntr_value\n from sys.dm_os_performance_counters\n where object_name like '%Memory Manager%' and counter_name like\n'Maximum Workspace Memory (KB)%') as Maximum_Workspace_Memory_KB,\n (select cntr_value\n from sys.dm_os_performance_counters\n where object_name like '%Memory Manager%' and counter_name like\n'Target Server Memory (KB)%') as Target_Server_Memory_KB,\n(select cntr_value\n from sys.dm_os_performance_counters\n where object_name like '%Memory Manager%' and counter_name like\n'Maximum Workspace Memory (KB)%') * 100.0\n /\n (select cntr_value\n from sys.dm_os_performance_counters\n where object_name like '%Memory Manager%' and counter_name like\n'Target Server Memory (KB)%') as Ratio\n\n-- Oracle\nSELECT sum(bytes)/1024/1024\nFROM v$sgastat;\n\nThank you\nNik\n\n\n\nOn Mon, Apr 8, 2013 at 3:18 AM, hubert depesz lubaczewski <[email protected]\n> wrote:\n\n> On Sun, Apr 07, 2013 at 09:27:42PM -0700, Nik Tek wrote:\n> > Thank you Depesz!\n> > But I have a naive question, why isn't a straight forword approach for\n> > postgres, unlike Oracle or MSSQL?\n>\n> No idea. And how do you get memory usage in Oracle or MSSQL?\n>\n> Best regards,\n>\n> depesz\n>\n>\n\n--For MSSQL select (select cntr_value from sys.dm_os_performance_counters where object_name like '%Memory Manager%' and counter_name like 'Maximum Workspace Memory (KB)%') as Maximum_Workspace_Memory_KB,\n (select cntr_value from sys.dm_os_performance_counters where object_name like '%Memory Manager%' and counter_name like 'Target Server Memory (KB)%') as Target_Server_Memory_KB,\n (select cntr_value from sys.dm_os_performance_counters where object_name like '%Memory Manager%' and counter_name like 'Maximum Workspace Memory (KB)%') * 100.0 \n / (select cntr_value from sys.dm_os_performance_counters where object_name like '%Memory Manager%' and counter_name like 'Target Server Memory (KB)%') as Ratio\n-- OracleSELECT sum(bytes)/1024/1024FROM v$sgastat;Thank youNik\nOn Mon, Apr 8, 2013 at 3:18 AM, hubert depesz lubaczewski <[email protected]> wrote:\nOn Sun, Apr 07, 2013 at 09:27:42PM -0700, Nik Tek wrote:\n> Thank you Depesz!\n> But I have a naive question, why isn't a straight forword approach for\n> postgres, unlike Oracle or MSSQL?\n\nNo idea. And how do you get memory usage in Oracle or MSSQL?\n\nBest regards,\n\ndepesz",
"msg_date": "Tue, 9 Apr 2013 11:24:22 -0700",
"msg_from": "Nik Tek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Find how much memory is postgres using"
},
{
"msg_contents": "On Tue, Apr 09, 2013 at 11:24:22AM -0700, Nik Tek wrote:\n> --For MSSQL\n> select\n...\n> -- Oracle\n...\n\nWell, the answer is simple - in Microsoft and Oracle, someone wrote such\nviews/functions. In Pg - not. You are welcome to provide a patch,\nthough :)\n\nBest regards,\n\ndepesz\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 9 Apr 2013 20:34:07 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Find how much memory is postgres using"
},
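For contrast, a minimal sketch of what PostgreSQL itself exposes: pg_settings reports configured memory allocations, not live per-backend usage, which is why the rest of the thread points at OS-level tools:

  SELECT name, setting, unit
  FROM pg_settings
  WHERE name IN ('shared_buffers', 'wal_buffers', 'work_mem',
                 'maintenance_work_mem', 'max_connections');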
{
"msg_contents": "Hi Depesz,\n\n--Here is better one for Oracle by sga/pga.\nSELECT DECODE (GROUPING (nm), 1, 'total', nm) nm,\n ROUND (SUM (val / 1024 / 1024)) MB\n FROM (SELECT 'sga' nm, SUM (VALUE) val FROM v$sga\n UNION ALL\n SELECT 'pga', SUM (VALUE)\n FROM v$sysstat\n WHERE name = 'session pga memory')\nGROUP BY ROLLUP (nm);\n\nSure, I will take up the task, will send you the script once it is ready,\nso you can bless it. :)\n\nRegards\nNik\n\n\n\n\nOn Tue, Apr 9, 2013 at 11:34 AM, hubert depesz lubaczewski <\[email protected]> wrote:\n\n> On Tue, Apr 09, 2013 at 11:24:22AM -0700, Nik Tek wrote:\n> > --For MSSQL\n> > select\n> ...\n> > -- Oracle\n> ...\n>\n> Well, the answer is simple - in Microsoft and Oracle, someone wrote such\n> views/functions. In Pg - not. You are welcome to provide a patch,\n> though :)\n>\n> Best regards,\n>\n> depesz\n>\n>\n\nHi Depesz,--Here is better one for Oracle by sga/pga.SELECT DECODE (GROUPING (nm), 1, 'total', nm) nm, ROUND (SUM (val / 1024 / 1024)) MB\n FROM (SELECT 'sga' nm, SUM (VALUE) val FROM v$sga UNION ALL SELECT 'pga', SUM (VALUE) FROM v$sysstat WHERE name = 'session pga memory')\nGROUP BY ROLLUP (nm);Sure, I will take up the task, will send you the script once it is ready, so you can bless it. :)Regards\nNikOn Tue, Apr 9, 2013 at 11:34 AM, hubert depesz lubaczewski <[email protected]> wrote:\nOn Tue, Apr 09, 2013 at 11:24:22AM -0700, Nik Tek wrote:\n\n> --For MSSQL\n> select\n...\n> -- Oracle\n...\n\nWell, the answer is simple - in Microsoft and Oracle, someone wrote such\nviews/functions. In Pg - not. You are welcome to provide a patch,\nthough :)\n\nBest regards,\n\ndepesz",
"msg_date": "Tue, 9 Apr 2013 11:42:19 -0700",
"msg_from": "Nik Tek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Find how much memory is postgres using"
}
] |
[
{
"msg_contents": "I'm encountering an issue where PG 9.2.4 (we also see this with 9.2.3) \nis picking a plan involving a bitmap heap scan that turns out to be much \nslower than a nested-loop plan using indexes.\n\nThe planner picks the hashjoin plan by default (see attached files)\n\nBitmap Heap Scan on public.table_b_2 b (cost=172635.99..9800225.75 \nrows=8435754 width=10) (actual t\nime=9132.194..1785196.352 rows=9749680 loops=1)\n Recheck Cond: ((b.organization_id = 3) AND \n(b.year = 2013) AND (b.month = 3))\n Rows Removed by Index Recheck: 313195667\n Filter: (b.product_id = 2)\n\nIs the part that seems be causing the problem (or at least taking most \nof the time, other than the final aggregation)\n\nIf I set enable_hashjoin=false and enable_mergejoin=false I get the \nnestedloop join plan.\n\ntable_b is 137 GB plus indexes each on is around 43 GB\ntable_a is 20 GB\n\nrandom_page_cost = 2.0\neffective_cache_size = 3500MB\ncpu_tuple_cost = 0.01\ncpu_index_tuple_cost = 0.005\ncpu_operator_cost = 0.0025\nwork_mem = 64MB\nshared_buffers = 300MB (for this output, I've also had it at 2GB)\n\nIf I bump cpu_tuple_cost to the 10-20 range it will pick the nested loop \njoin for some date ranges but not all. cpu_tuple_cost of 20 doesn't \nsound like an sane value.\n\nThis database used to run 8.3 where it picked the nested-loop join. We \nused pg_upgrade to migrate to 9.2\n\nAny ideas why the bitmap heap scan is much slower than the planner expects?\n\nSteve\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 10 Apr 2013 09:49:55 -0400",
"msg_from": "Steve Singer <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow bitmap heap scans on pg 9.2"
},
{
"msg_contents": "On Wed, Apr 10, 2013 at 09:49:55AM -0400, Steve Singer wrote:\n> I'm encountering an issue where PG 9.2.4 (we also see this with\n> 9.2.3) is picking a plan involving a bitmap heap scan that turns out\n> to be much slower than a nested-loop plan using indexes.\n> \n> The planner picks the hashjoin plan by default (see attached files)\n> \n> Bitmap Heap Scan on public.table_b_2 b (cost=172635.99..9800225.75\n> rows=8435754 width=10) (actual t\n> ime=9132.194..1785196.352 rows=9749680 loops=1)\n> Recheck Cond: ((b.organization_id = 3)\n> AND (b.year = 2013) AND (b.month = 3))\n> Rows Removed by Index Recheck: 313195667\n> Filter: (b.product_id = 2)\n> \n> Is the part that seems be causing the problem (or at least taking\n> most of the time, other than the final aggregation)\n> \n> If I set enable_hashjoin=false and enable_mergejoin=false I get the\n> nestedloop join plan.\n> \n> table_b is 137 GB plus indexes each on is around 43 GB\n> table_a is 20 GB\n> \n> random_page_cost = 2.0\n> effective_cache_size = 3500MB\n> cpu_tuple_cost = 0.01\n> cpu_index_tuple_cost = 0.005\n> cpu_operator_cost = 0.0025\n> work_mem = 64MB\n> shared_buffers = 300MB (for this output, I've also had it at 2GB)\n> \n> If I bump cpu_tuple_cost to the 10-20 range it will pick the nested\n> loop join for some date ranges but not all. cpu_tuple_cost of 20\n> doesn't sound like an sane value.\n> \n> This database used to run 8.3 where it picked the nested-loop join.\n> We used pg_upgrade to migrate to 9.2\n> \n> Any ideas why the bitmap heap scan is much slower than the planner expects?\n> \n> Steve\n\nHi Steve,\n\nThe one thing that stands out to me is that you are working with 200GB of\ndata on a machine with 4-8GB of ram and you have the random_page_cost set\nto 2.0. That is almost completely uncached and I would expect a value of\n10 or more to be closer to reality.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Apr 2013 08:56:57 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow bitmap heap scans on pg 9.2"
},
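Ken's suggestion can be tried without restarting anything; a minimal sketch (the value 10 is his ballpark figure, not a measured number):

    SET random_page_cost = 10;   -- affects only the current session
    -- re-run the problem query under EXPLAIN ANALYZE here and compare the chosen plan
    RESET random_page_cost;

    -- if the higher value consistently produces better plans, set it in postgresql.conf
    -- (random_page_cost = 10) and reload the configuration:
    SELECT pg_reload_conf();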
{
"msg_contents": "On 13-04-10 09:56 AM, [email protected] wrote:\n> On Wed, Apr 10, 2013 at 09:49:55AM -0400, Steve Singer wrote:\n\n>\n> Hi Steve,\n>\n> The one thing that stands out to me is that you are working with 200GB of\n> data on a machine with 4-8GB of ram and you have the random_page_cost set\n> to 2.0. That is almost completely uncached and I would expect a value of\n> 10 or more to be closer to reality.\n\nSetting random_page_cost to 15 makes the planner choose the nested-loop \nplan (at least the date range I tried).\n\nI thought that the point of effective cache size was to tell the planner \nhigh likely it is for a random page to be in cache. With 200GB of data \nfor this query and an effective cache size of 3.5 GB I would have \nexpected that to be accounted for.\n\n\n\n>\n> Regards,\n> Ken\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Apr 2013 11:56:32 -0400",
"msg_from": "Steve Singer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow bitmap heap scans on pg 9.2"
},
{
"msg_contents": "On Wed, Apr 10, 2013 at 11:56:32AM -0400, Steve Singer wrote:\n> On 13-04-10 09:56 AM, [email protected] wrote:\n> >On Wed, Apr 10, 2013 at 09:49:55AM -0400, Steve Singer wrote:\n> \n> >\n> >Hi Steve,\n> >\n> >The one thing that stands out to me is that you are working with 200GB of\n> >data on a machine with 4-8GB of ram and you have the random_page_cost set\n> >to 2.0. That is almost completely uncached and I would expect a value of\n> >10 or more to be closer to reality.\n> \n> Setting random_page_cost to 15 makes the planner choose the\n> nested-loop plan (at least the date range I tried).\n> \n> I thought that the point of effective cache size was to tell the\n> planner high likely it is for a random page to be in cache. With\n> 200GB of data for this query and an effective cache size of 3.5 GB I\n> would have expected that to be accounted for.\n> \nFor random_page_cost to be that low, the database would need to be\nmostly cached. 3.5GB is almost 100X too small to do that unless your\nquery exhibits a large amount of locality of reference. Values for\nrandom_page_cost between 10 and 20 are very reasonable for disk-bound\nI/O scenarios.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Apr 2013 12:49:52 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow bitmap heap scans on pg 9.2"
},
{
"msg_contents": "On Wed, Apr 10, 2013 at 6:49 AM, Steve Singer <[email protected]>wrote:\n\n> I'm encountering an issue where PG 9.2.4 (we also see this with 9.2.3) is\n> picking a plan involving a bitmap heap scan that turns out to be much\n> slower than a nested-loop plan using indexes.\n>\n> The planner picks the hashjoin plan by default (see attached files)\n>\n> Bitmap Heap Scan on public.table_b_2 b (cost=172635.99..9800225.75\n> rows=8435754 width=10) (actual t\n> ime=9132.194..1785196.352 rows=9749680 loops=1)\n> Recheck Cond: ((b.organization_id = 3) AND\n> (b.year = 2013) AND (b.month = 3))\n> Rows Removed by Index Recheck: 313195667\n> Filter: (b.product_id = 2)\n>\n\nI think the index recheck means your bitmap is overflowing (i.e. needing\nmore space than work_mem) and so keeping only the pages which have at least\none match, which means all rows in those pages need to be rechecked. How\nmany rows does the table have? You might be essentially doing a seq scan,\nbut with the additional overhead of the bitmap machinery. Could you do\n\"explain (analyze,buffers)\", preferably with track_io_timing set to on?\n\n Cheers,\n\nJeff\n\nOn Wed, Apr 10, 2013 at 6:49 AM, Steve Singer <[email protected]> wrote:\nI'm encountering an issue where PG 9.2.4 (we also see this with 9.2.3) is picking a plan involving a bitmap heap scan that turns out to be much slower than a nested-loop plan using indexes.\n\nThe planner picks the hashjoin plan by default (see attached files)\n\nBitmap Heap Scan on public.table_b_2 b (cost=172635.99..9800225.75 rows=8435754 width=10) (actual t\nime=9132.194..1785196.352 rows=9749680 loops=1)\n Recheck Cond: ((b.organization_id = 3) AND (b.year = 2013) AND (b.month = 3))\n Rows Removed by Index Recheck: 313195667\n Filter: (b.product_id = 2)I think the index recheck means your bitmap is overflowing (i.e. needing more space than work_mem) and so keeping only the pages which have at least one match, which means all rows in those pages need to be rechecked. How many rows does the table have? You might be essentially doing a seq scan, but with the additional overhead of the bitmap machinery. Could you do \"explain (analyze,buffers)\", preferably with track_io_timing set to on?\n Cheers,Jeff",
"msg_date": "Wed, 10 Apr 2013 11:06:16 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow bitmap heap scans on pg 9.2"
},
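Spelling out what Jeff asks for above (track_io_timing is new in 9.2 and is superuser-only when set per session; otherwise it can be enabled in postgresql.conf and reloaded):

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ...;   -- placeholder for the original query

The BUFFERS option reports shared/local/temp blocks hit, read and written for each node, and with track_io_timing on it also reports the time spent waiting on those reads and writes.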
{
"msg_contents": "On Wed, Apr 10, 2013 at 8:56 AM, Steve Singer <[email protected]>wrote:\n\n> On 13-04-10 09:56 AM, [email protected] wrote:\n>\n>> On Wed, Apr 10, 2013 at 09:49:55AM -0400, Steve Singer wrote:\n>>\n>\n>\n>> Hi Steve,\n>>\n>> The one thing that stands out to me is that you are working with 200GB of\n>> data on a machine with 4-8GB of ram and you have the random_page_cost set\n>> to 2.0. That is almost completely uncached and I would expect a value of\n>> 10 or more to be closer to reality.\n>>\n>\n> Setting random_page_cost to 15 makes the planner choose the nested-loop\n> plan (at least the date range I tried).\n>\n> I thought that the point of effective cache size was to tell the planner\n> high likely it is for a random page to be in cache.\n\n\n\ne_c_s tells it how likely it is to still be in cache the second (and\nsubsequent) time the page is visited during the *same query*. It doesn't\ntell it how likely it is to be in cache the first time it is needed in a\ngiven query. (Also, e_c_s is irrelevant for bitmap scans, as they\ninherently hit every block only once)\n\n\nAlso, it doesn't tell how expensive it is to bring it into cache when it is\nneeded. That is what random_page_cost is for. If you tell that those\nfetches are going to be cheap, then it doesn't matter so much how many of\nthem it is going to have to do.\n\nCheers,\n\nJeff\n\nOn Wed, Apr 10, 2013 at 8:56 AM, Steve Singer <[email protected]> wrote:\nOn 13-04-10 09:56 AM, [email protected] wrote:\n\nOn Wed, Apr 10, 2013 at 09:49:55AM -0400, Steve Singer wrote:\n\n\n\n\nHi Steve,\n\nThe one thing that stands out to me is that you are working with 200GB of\ndata on a machine with 4-8GB of ram and you have the random_page_cost set\nto 2.0. That is almost completely uncached and I would expect a value of\n10 or more to be closer to reality.\n\n\nSetting random_page_cost to 15 makes the planner choose the nested-loop plan (at least the date range I tried).\n\nI thought that the point of effective cache size was to tell the planner high likely it is for a random page to be in cache. e_c_s tells it how likely it is to still be in cache the second (and subsequent) time the page is visited during the *same query*. It doesn't tell it how likely it is to be in cache the first time it is needed in a given query. (Also, e_c_s is irrelevant for bitmap scans, as they inherently hit every block only once)\nAlso, it doesn't tell how expensive it is to bring it into cache when it is needed. That is what random_page_cost is for. If you tell that those fetches are going to be cheap, then it doesn't matter so much how many of them it is going to have to do.\nCheers,Jeff",
"msg_date": "Wed, 10 Apr 2013 11:15:34 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow bitmap heap scans on pg 9.2"
},
{
"msg_contents": "On 13-04-10 02:06 PM, Jeff Janes wrote:\n> On Wed, Apr 10, 2013 at 6:49 AM, Steve Singer <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n\n> I think the index recheck means your bitmap is overflowing (i.e. needing\n> more space than work_mem) and so keeping only the pages which have at\n> least one match, which means all rows in those pages need to be\n> rechecked. How many rows does the table have? You might be essentially\n> doing a seq scan, but with the additional overhead of the bitmap\n> machinery. Could you do \"explain (analyze,buffers)\", preferably with\n> track_io_timing set to on?\n>\n\ntable_b has 1,530,710,469 rows\n\nAttached is the output with track_io_timings and buffers.\n\n\n\n\n\n> Cheers,\n>\n> Jeff\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 10 Apr 2013 19:54:09 -0400",
"msg_from": "Steve Singer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow bitmap heap scans on pg 9.2"
},
{
"msg_contents": "On 13-04-10 07:54 PM, Steve Singer wrote:\n> On 13-04-10 02:06 PM, Jeff Janes wrote:\n>> On Wed, Apr 10, 2013 at 6:49 AM, Steve Singer <[email protected]\n>> <mailto:[email protected]>> wrote:\n>>\n>\n>> I think the index recheck means your bitmap is overflowing (i.e. needing\n>> more space than work_mem) and so keeping only the pages which have at\n>> least one match, which means all rows in those pages need to be\n>> rechecked. How many rows does the table have? You might be essentially\n>> doing a seq scan, but with the additional overhead of the bitmap\n>> machinery. Could you do \"explain (analyze,buffers)\", preferably with\n>> track_io_timing set to on?\n>>\n>\n> table_b has 1,530,710,469 rows\n>\n> Attached is the output with track_io_timings and buffers.\n>\n\nI've done some more testing with a random_page_cost=20.\n\nThis gives me the nested-loop plan for the various date ranges I've tried.\n\nHowever table_a_2 and table_b_2 are actually partition tables. This \nquery only needs to look at a single partition. When I run this same \nquery against a different partition (a smaller partition, but still \nbigger than cache) it picks hash join plan involving a seq scan of \ntable_b but no bitmap index scan. On this partition the hash-join \nplans tend to take 15 minutes versus 2 minutes when I disable hashjoin \nplans. Bumping random_page_cost higher doesn't fix this.\n\nI think the reason why it is picking the hash join based plans is \nbecause of\n\nIndex Scan using table_b_1_ptid_orgid_ym_unq on table_b_1 b \n(cost=0.00..503.86 rows=1 width=10) (actual time=0.016..0.017 rows=1 \nloops=414249)\n Index Cond: ((a.id = a_id) AND (organization_id = \n2) AND (year = 2013) AND (month = 3))\n Filter: (product_id = 1)\n\nI think we are over-estimating the cost of the index scans in the inner \nloop. This seems similar to what was discussed a few months ago\nhttp://www.postgresql.org/message-id/[email protected]\n\nThis version of PG should have 3e9960e9d935e7e applied. I am trying to \nget the database copied to a machine where I can easily switch PG \nversions and test this against something prior to that commit and also \nagainst a 9.3 build.\n\nSteve\n\n\n>\n>\n>> Cheers,\n>>\n>> Jeff\n>\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 11 Apr 2013 10:20:11 -0400",
"msg_from": "Steve Singer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow bitmap heap scans on pg 9.2"
},
{
"msg_contents": "On Thursday, April 11, 2013, Steve Singer wrote:\n\n>\n> I think the reason why it is picking the hash join based plans is because\n> of\n>\n> Index Scan using table_b_1_ptid_orgid_ym_unq on table_b_1 b\n> (cost=0.00..503.86 rows=1 width=10) (actual time=0.016..0.017 rows=1\n> loops=414249)\n> Index Cond: ((a.id = a_id) AND (organization_id = 2)\n> AND (year = 2013) AND (month = 3))\n> Filter: (product_id = 1)\n>\n\n\nTrying to reason about how the planner estimates costs for the inner side\nof nested loops makes my head hurt.\nSo before doing that, could you run explain (analyze,buffers) on both of\nthese much simpler (but hopefully morally equivalent to this planner node)\nsql:\n\nselect * from table_b_1_b where a_id = <some plausible value> and\norganization_id=2 and year=2013 and month=3\n\nselect * from table_b_1_b where a_id = <some plausible value> and\norganization_id=2 and year=2013 and month=3 and product_id=1\n\n\nOf particular interest here is whether the estimate of 1 row is due to the\nspecificity of the filter, or if the index clauses alone are specific\nenough to drive that estimate. (If you get many rows without the\nproduct_id filter, that would explain the high estimate.).\n\nPlease run with the default cost parameters, or if you can't get the right\nplan with the defaults, specify what the used parameters were.\n\nCheers,\n\nJeff\n\nOn Thursday, April 11, 2013, Steve Singer wrote:\nI think the reason why it is picking the hash join based plans is because of\n\nIndex Scan using table_b_1_ptid_orgid_ym_unq on table_b_1 b (cost=0.00..503.86 rows=1 width=10) (actual time=0.016..0.017 rows=1 loops=414249)\n Index Cond: ((a.id = a_id) AND (organization_id = 2) AND (year = 2013) AND (month = 3))\n Filter: (product_id = 1)Trying to reason about how the planner estimates costs for the inner side of nested loops makes my head hurt. So before doing that, could you run explain (analyze,buffers) on both of these much simpler (but hopefully morally equivalent to this planner node) sql:\nselect * from table_b_1_b where a_id = <some plausible value> and organization_id=2 and year=2013 and month=3select * from table_b_1_b where a_id = <some plausible value> and organization_id=2 and year=2013 and month=3 and product_id=1\nOf particular interest here is whether the estimate of 1 row is due to the specificity of the filter, or if the index clauses alone are specific enough to drive that estimate. (If you get many rows without the product_id filter, that would explain the high estimate.).\nPlease run with the default cost parameters, or if you can't get the right plan with the defaults, specify what the used parameters were.Cheers,Jeff",
"msg_date": "Fri, 12 Apr 2013 18:20:39 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow bitmap heap scans on pg 9.2"
},
{
"msg_contents": "On 13-04-12 09:20 PM, Jeff Janes wrote:\n> On Thursday, April 11, 2013, Steve Singer wrote:\n>\n>\n> I think the reason why it is picking the hash join based plans is\n> because of\n>\n> Index Scan using table_b_1_ptid_orgid_ym_unq on table_b_1 b\n> (cost=0.00..503.86 rows=1 width=10) (actual time=0.016..0.017 rows=1\n> loops=414249)\n> Index Cond: ((a.id <http://a.id> = a_id) AND\n> (organization_id = 2) AND (year = 2013) AND (month = 3))\n> Filter: (product_id = 1)\n>\n>\n>\n> Trying to reason about how the planner estimates costs for the inner\n> side of nested loops makes my head hurt.\n> So before doing that, could you run explain (analyze,buffers) on both of\n> these much simpler (but hopefully morally equivalent to this planner\n> node) sql:\n>\n> select * from table_b_1_b where a_id = <some plausible value> and\n> organization_id=2 and year=2013 and month=3\n>\n> select * from table_b_1_b where a_id = <some plausible value> and\n> organization_id=2 and year=2013 and month=3 and product_id=1\n>\n\ntable_b_1 is a partition of table_b on product_id so when querying \ntable table_b_1 directly they are equivalent\n\nexplain (analyze,buffers) select * FROM table_b_1 where a_id=1128944 and \norganization_id=2 and year=2013 and month=3;\n \nQUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------\n-----------\n Index Scan using table_b_1_ptid_orgid_ym_unq on table_b_1 \n(cost=0.00..50.73 rows=1 width=56) (actual time=60.328..60.330 rows=\n1 loops=1)\n Index Cond: ((a_id = 1128944) AND (organization_id = 2) AND (year = \n2013) AND (month = 3))\n Buffers: shared hit=1 read=5\n Total runtime: 60.378 ms\n(4 rows)\n\n\nThe plans are the same if I do or do not specify the product_id in the \nwhere clause (if I query the parent table and neglect to query the query \nclause it of course queries all the other partitions)\n\n\n\n\n>\n> Of particular interest here is whether the estimate of 1 row is due to\n> the specificity of the filter, or if the index clauses alone are\n> specific enough to drive that estimate. (If you get many rows without\n> the product_id filter, that would explain the high estimate.).\n\nThe index clauses alone , we normally expect 1 row back for a query like \nthat.\n\n\n>\n> Please run with the default cost parameters, or if you can't get the\n> right plan with the defaults, specify what the used parameters were.\n\nindexTotalCost += index->pages * spc_random_page_cost / 100000.0;\n\nIs driving my high costs on the inner loop. The index has 2-5 million \npages depending on the partition . If I run this against 9.2.2 with / \n10000.0 the estimate is even higher.\n\nIf I try this with this with the\n\n*indexTotalCost += log(1.0 + index->pages / 10000.0) * spc_random_page_cost;\n\nfrom 9.3 and I play I can make this work I can it pick the plan on some \npartitions with product_id=2 but not product_id=1. If I remove the \nfudge-factor cost adjustment line I get the nested-loop plan always.\n\nBreaking the index into smaller partial indexes for each year seems to \nbe giving me the plans I want with random_page_cost=2 (I might also try \npartial indexes on the month).\n\nEven with the 9.3 log based fudge-factor we are seeing the fudge-factor \nbeing big enough so that the planner is picking a table scan over the \nindex. 
A lot of loop iterations can be satisfied by cached pages of the \nindex the fudge-factor doesn't really account for this.\n\n\n\n\n>\n> Cheers,\n>\n> Jeff\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 13 Apr 2013 12:14:10 -0400",
"msg_from": "Steve Singer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow bitmap heap scans on pg 9.2"
},
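A rough sketch of the per-year partial indexes Steve mentions; the column list follows the table_b_1_ptid_orgid_ym_unq index shown in the plans, with year moved into the predicate (the index names and exact column order are assumptions):

    CREATE INDEX table_b_1_ptid_orgid_m_2013_idx
        ON table_b_1 (a_id, organization_id, month)
        WHERE year = 2013;

    CREATE INDEX table_b_1_ptid_orgid_m_2012_idx
        ON table_b_1 (a_id, organization_id, month)
        WHERE year = 2012;

Because each partial index covers far fewer pages than the full index, the pages-based fudge factor discussed above penalizes it much less.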
{
"msg_contents": "On Sat, Apr 13, 2013 at 9:14 AM, Steve Singer <[email protected]>wrote:\n\n>\n>>\n>>\n> indexTotalCost += index->pages * spc_random_page_cost / 100000.0;\n>\n> Is driving my high costs on the inner loop. The index has 2-5 million\n> pages depending on the partition . If I run this against 9.2.2 with /\n> 10000.0 the estimate is even higher.\n>\n> If I try this with this with the\n>\n> *indexTotalCost += log(1.0 + index->pages / 10000.0) *\n> spc_random_page_cost;\n>\n> from 9.3 and I play I can make this work I can it pick the plan on some\n> partitions with product_id=2 but not product_id=1. If I remove the\n> fudge-factor cost adjustment line I get the nested-loop plan always.\n>\n\nThat was only temporarily the formula during 9.3dev. Tom re-did that\nentire part of the code rather substantially in the current tip of 9.3\n(commit 31f38f28b00cbe2b). Now it is based on the number of tuples, and\nthe height, rather than pages, and is multiplied by the cpu_operator_cost\nnot the random_page_cost.\n\ndescentCost = ceil(log(index->tuples) / log(2.0)) * cpu_operator_cost;\n\n...\n\ndescentCost = (index->tree_height + 1) * 50.0 * cpu_operator_cost;\n\n\n\n>\n> Breaking the index into smaller partial indexes for each year seems to be\n> giving me the plans I want with random_page_cost=2 (I might also try\n> partial indexes on the month).\n>\n> Even with the 9.3 log based fudge-factor we are seeing the fudge-factor\n> being big enough so that the planner is picking a table scan over the index.\n\n\nHave you tried it under 9.3 HEAD, rather than just back-porting the\ntemporary\n*indexTotalCost += log(1.0 + index->pages / 10000.0) * spc_random_page_cost;\ncode into 9.2?\n\nIf you are trying to make your own private copy of 9.2, then removing the\nfudge factor altogether is probably the way to go. But if you want to help\nimprove future versions, you probably need to test with the most up-to-date\ndev version.\n\n\n\n> A lot of loop iterations can be satisfied by cached pages of the index\n> the fudge-factor doesn't really account for this.\n>\n\n\nSetting random_page_cost to 2 is already telling it that most of fetches\nare coming from the cache. Of course for the upper blocks of an index even\nmore than \"most\" are likely to be, but the latest dev code takes care of\nthat.\n\nCheers,\n\nJeff\n\nOn Sat, Apr 13, 2013 at 9:14 AM, Steve Singer <[email protected]> wrote:\n\n \nindexTotalCost += index->pages * spc_random_page_cost / 100000.0;\n\nIs driving my high costs on the inner loop. The index has 2-5 million pages depending on the partition . If I run this against 9.2.2 with / 10000.0 the estimate is even higher.\n\nIf I try this with this with the\n\n*indexTotalCost += log(1.0 + index->pages / 10000.0) * spc_random_page_cost;\n\nfrom 9.3 and I play I can make this work I can it pick the plan on some partitions with product_id=2 but not product_id=1. If I remove the fudge-factor cost adjustment line I get the nested-loop plan always.\nThat was only temporarily the formula during 9.3dev. Tom re-did that entire part of the code rather substantially in the current tip of 9.3 (commit 31f38f28b00cbe2b). 
Now it is based on the number of tuples, and the height, rather than pages, and is multiplied by the cpu_operator_cost not the random_page_cost.\ndescentCost = ceil(log(index->tuples) / log(2.0)) * cpu_operator_cost;...descentCost = (index->tree_height + 1) * 50.0 * cpu_operator_cost;\n \n\nBreaking the index into smaller partial indexes for each year seems to be giving me the plans I want with random_page_cost=2 (I might also try partial indexes on the month).\n\nEven with the 9.3 log based fudge-factor we are seeing the fudge-factor being big enough so that the planner is picking a table scan over the index.Have you tried it under 9.3 HEAD, rather than just back-porting the temporary\n*indexTotalCost += log(1.0 + index->pages / 10000.0) * spc_random_page_cost;code into 9.2?If you are trying to make your own private copy of 9.2, then removing the fudge factor altogether is probably the way to go. But if you want to help improve future versions, you probably need to test with the most up-to-date dev version.\n A lot of loop iterations can be satisfied by cached pages of the index the fudge-factor doesn't really account for this.\nSetting random_page_cost to 2 is already telling it that most of fetches are coming from the cache. Of course for the upper blocks of an index even more than \"most\" are likely to be, but the latest dev code takes care of that.\nCheers,Jeff",
"msg_date": "Sat, 13 Apr 2013 13:54:22 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow bitmap heap scans on pg 9.2"
},
{
"msg_contents": "On 13-04-13 04:54 PM, Jeff Janes wrote:\n> On Sat, Apr 13, 2013 at 9:14 AM, Steve Singer <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n>\n> indexTotalCost += index->pages * spc_random_page_cost / 100000.0;\n>\n> Is driving my high costs on the inner loop. The index has 2-5\n> million pages depending on the partition . If I run this against\n> 9.2.2 with / 10000.0 the estimate is even higher.\n>\n> If I try this with this with the\n>\n> *indexTotalCost += log(1.0 + index->pages / 10000.0) *\n> spc_random_page_cost;\n>\n> from 9.3 and I play I can make this work I can it pick the plan on\n> some partitions with product_id=2 but not product_id=1. If I\n> remove the fudge-factor cost adjustment line I get the nested-loop\n> plan always.\n>\n>\n> That was only temporarily the formula during 9.3dev. Tom re-did that\n> entire part of the code rather substantially in the current tip of 9.3\n> (commit 31f38f28b00cbe2b). Now it is based on the number of tuples, and\n> the height, rather than pages, and is multiplied by the\n> cpu_operator_cost not the random_page_cost.\n>\n> descentCost = ceil(log(index->tuples) / log(2.0)) * cpu_operator_cost;\n>\n> ...\n>\n> descentCost = (index->tree_height + 1) * 50.0 * cpu_operator_cost;\n>\n>\n> Breaking the index into smaller partial indexes for each year seems\n> to be giving me the plans I want with random_page_cost=2 (I might\n> also try partial indexes on the month).\n>\n> Even with the 9.3 log based fudge-factor we are seeing the\n> fudge-factor being big enough so that the planner is picking a table\n> scan over the index.\n>\n>\n> Have you tried it under 9.3 HEAD, rather than just back-porting the\n> temporary\n> *indexTotalCost += log(1.0 + index->pages / 10000.0) * spc_random_page_cost;\n> code into 9.2?\n>\n> If you are trying to make your own private copy of 9.2, then removing\n> the fudge factor altogether is probably the way to go. But if you want\n> to help improve future versions, you probably need to test with the most\n> up-to-date dev version.\n\nI will do that in a few days. I don't have enough disk space on this \ndev server to have a 9.2 datadir and a 9.3 one for this database. Once \nI have a solution that I can use with 9.2 firmed up I can upgrade the \ndatadir to 9.3 and test this. I am hoping I can get a set of partial \nindexes that will give good results with an unmodified 9.2, so far that \nlooks promising but I still have more cases to verify (these indexes \ntake a while to build).\n\n>\n> A lot of loop iterations can be satisfied by cached pages of the\n> index the fudge-factor doesn't really account for this.\n>\n>\n>\n> Setting random_page_cost to 2 is already telling it that most of fetches\n> are coming from the cache. Of course for the upper blocks of an index\n> even more than \"most\" are likely to be, but the latest dev code takes\n> care of that.\n>\n> Cheers,\n>\n> Jeff\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 14 Apr 2013 20:06:42 -0400",
"msg_from": "Steve Singer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow bitmap heap scans on pg 9.2"
},
{
"msg_contents": "On 13-04-14 08:06 PM, Steve Singer wrote:\n> On 13-04-13 04:54 PM, Jeff Janes wrote:\n\n>>\n>> If you are trying to make your own private copy of 9.2, then removing\n>> the fudge factor altogether is probably the way to go. But if you want\n>> to help improve future versions, you probably need to test with the most\n>> up-to-date dev version.\n>\n> I will do that in a few days. I don't have enough disk space on this\n> dev server to have a 9.2 datadir and a 9.3 one for this database. Once\n> I have a solution that I can use with 9.2 firmed up I can upgrade the\n> datadir to 9.3 and test this. I am hoping I can get a set of partial\n> indexes that will give good results with an unmodified 9.2, so far that\n> looks promising but I still have more cases to verify (these indexes\n> take a while to build).\n>\n\nI've run these queries against a recent master/9.3 and the planner is \npicking the nested-loop plan for using the full index. This is with \nrandom_page_cost=2.\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Apr 2013 13:04:40 -0400",
"msg_from": "Steve Singer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow bitmap heap scans on pg 9.2"
}
] |
[
{
"msg_contents": "Afternoon\n\nSo I just realized I've been just reimporting me Postgres configs from one\nversion to the next, since they were initially customized for my setup.\n\nMaybe from 7.x... And now on 9.2.4\n\nIs there an easy/clean way to adapt my old config file to the new stuff,\nI'm not sure what all has changed, so wondering if I just have to go line\nby line and somehow consolidate old to new, area there any tools or\nmechanism to do so?\n\nI'm sure there are quite a few changes, just not sure if I'm missing\nanything major.\n\nThanks\nTory\n\nAfternoonSo I just realized I've been just reimporting me Postgres configs from one version to the next, since they were initially customized for my setup.\nMaybe from 7.x... And now on 9.2.4Is there an easy/clean way to adapt my old config file to the new stuff, I'm not sure what all has changed, so wondering if I just have to go line by line and somehow consolidate old to new, area there any tools or mechanism to do so?\nI'm sure there are quite a few changes, just not sure if I'm missing anything major.ThanksTory",
"msg_date": "Wed, 10 Apr 2013 14:25:05 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql.conf file from like 7.x to 9.2"
},
{
"msg_contents": "On 04/10/2013 04:25 PM, Tory M Blue wrote:\n\n> Is there an easy/clean way to adapt my old config file to the new stuff,\n> I'm not sure what all has changed, so wondering if I just have to go\n> line by line and somehow consolidate old to new, area there any tools or\n> mechanism to do so?\n\nEhhh, at that point, it's probably best to just start over. we took the \nannotated postgresql.conf and reevaluated each setting and compared it \nto similar/same settings in our old config. Then we made a file of \n*just* the stuff we changed, and made that the postgresql.conf, and keep \nthe annotated version around as defaults.conf to use as a reference. \nThat makes it a lot easier to copy between versions or incorporate \nnew/modified settings.\n\nOf course, all this will probably be moot when 9.3 comes out, as I \nbelieve it has the ability to include configuration fragments. Probably \nanother good opportunity to clean up your configs.\n\nWe jumped from 8.2 to 9.1 in a single upgrade, so while not quite as \nwide as going from 7.x to 9.2, you could probably benefit from a reeval.\n\nThe fundamental settings are pretty much the same, so far as I know. \nSettings we always change:\n\nshared_buffers\nwork_mem\nmaintenance_work_mem\ndefault_statistics_target\neffective_cache_size\nrandom_page_cost\narchive_mode\narchive_command\narchive_timeout\nlog_checkpoints\nlog_min_duration_statement\n\nSettings we usually tweak:\n\nautovacuum_vacuum_scale_factor\nautovacuum_analyze_scale_factor\nautovacuum_freeze_max_age\n\nSettings that are new, and could assist in setting up streaming or backups:\n\nwal_level\nmax_wal_senders\n\nPeople are getting more and more vocal about increasing cpu_tuple_cost, \nas the default is apparently too low in practice.\n\nEverything else? Salt to taste.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Apr 2013 16:42:14 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql.conf file from like 7.x to 9.2"
},
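Shaun's "file of just the stuff we changed" approach might look roughly like the following; every value here is a placeholder to be sized for the actual hardware and workload, not a recommendation:

    # postgresql.conf -- site overrides only; annotated file kept as defaults.conf for reference
    shared_buffers = 2GB
    work_mem = 32MB
    maintenance_work_mem = 512MB
    default_statistics_target = 100
    effective_cache_size = 12GB
    random_page_cost = 2.0
    archive_mode = on
    archive_command = 'cp %p /mnt/archive/%f'   # placeholder command
    archive_timeout = 300
    log_checkpoints = on
    log_min_duration_statement = 1000
    wal_level = hot_standby
    max_wal_senders = 3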
{
"msg_contents": "On Wed, Apr 10, 2013 at 2:42 PM, Shaun Thomas <[email protected]>wrote:\n\n> On 04/10/2013 04:25 PM, Tory M Blue wrote:\n>\n> Is there an easy/clean way to adapt my old config file to the new stuff,\n>> I'm not sure what all has changed, so wondering if I just have to go\n>> line by line and somehow consolidate old to new, area there any tools or\n>> mechanism to do so?\n>>\n>\n> Ehhh, at that point, it's probably best to just start over. we took the\n> annotated postgresql.conf and reevaluated each setting and compared it to\n> similar/same settings in our old config. Then we made a file of *just* the\n> stuff we changed, and made that the postgresql.conf, and keep the annotated\n> version around as defaults.conf to use as a reference. That makes it a lot\n> easier to copy between versions or incorporate new/modified settings.\n>\n> Of course, all this will probably be moot when 9.3 comes out, as I believe\n> it has the ability to include configuration fragments. Probably another\n> good opportunity to clean up your configs.\n>\n> We jumped from 8.2 to 9.1 in a single upgrade, so while not quite as wide\n> as going from 7.x to 9.2, you could probably benefit from a reeval.\n>\n> The fundamental settings are pretty much the same, so far as I know.\n> Settings we always change:\n>\n> shared_buffers\n> work_mem\n> maintenance_work_mem\n> default_statistics_target\n> effective_cache_size\n> random_page_cost\n> archive_mode\n> archive_command\n> archive_timeout\n> log_checkpoints\n> log_min_duration_statement\n>\n> Settings we usually tweak:\n>\n> autovacuum_vacuum_scale_factor\n> autovacuum_analyze_scale_**factor\n> autovacuum_freeze_max_age\n>\n> Settings that are new, and could assist in setting up streaming or backups:\n>\n> wal_level\n> max_wal_senders\n>\n> People are getting more and more vocal about increasing cpu_tuple_cost, as\n> the default is apparently too low in practice.\n>\n> Everything else? Salt to taste.\n>\n> --\n> Shaun Thomas\n>\n>\n> Thanks Shaun\n\nYa I actually didn't upgrade from 7 to 9 in one fell swoop, I've actually\nbeen pretty good at staying up with the releases (thanks to slon), but I\nrealized the other day when i rolled a new 9.2.4 rpm that I just keep using\nmy old postgres config. Now I'm sure we modified it somewhat in 8, but that\nwas probably the last time. So a performance tuning and config file\ncleansing is in order :)\n\nThanks again!\nTory\n\nOn Wed, Apr 10, 2013 at 2:42 PM, Shaun Thomas <[email protected]> wrote:\nOn 04/10/2013 04:25 PM, Tory M Blue wrote:\n\n\nIs there an easy/clean way to adapt my old config file to the new stuff,\nI'm not sure what all has changed, so wondering if I just have to go\nline by line and somehow consolidate old to new, area there any tools or\nmechanism to do so?\n\n\nEhhh, at that point, it's probably best to just start over. we took the annotated postgresql.conf and reevaluated each setting and compared it to similar/same settings in our old config. Then we made a file of *just* the stuff we changed, and made that the postgresql.conf, and keep the annotated version around as defaults.conf to use as a reference. That makes it a lot easier to copy between versions or incorporate new/modified settings.\n\nOf course, all this will probably be moot when 9.3 comes out, as I believe it has the ability to include configuration fragments. 
Probably another good opportunity to clean up your configs.\n\nWe jumped from 8.2 to 9.1 in a single upgrade, so while not quite as wide as going from 7.x to 9.2, you could probably benefit from a reeval.\n\nThe fundamental settings are pretty much the same, so far as I know. Settings we always change:\n\nshared_buffers\nwork_mem\nmaintenance_work_mem\ndefault_statistics_target\neffective_cache_size\nrandom_page_cost\narchive_mode\narchive_command\narchive_timeout\nlog_checkpoints\nlog_min_duration_statement\n\nSettings we usually tweak:\n\nautovacuum_vacuum_scale_factor\nautovacuum_analyze_scale_factor\nautovacuum_freeze_max_age\n\nSettings that are new, and could assist in setting up streaming or backups:\n\nwal_level\nmax_wal_senders\n\nPeople are getting more and more vocal about increasing cpu_tuple_cost, as the default is apparently too low in practice.\n\nEverything else? Salt to taste.\n\n-- \nShaun Thomas\nThanks ShaunYa I actually didn't upgrade from 7 to 9 in one fell swoop, I've actually been pretty good at staying up with the releases (thanks to slon), but I realized the other day when i rolled a new 9.2.4 rpm that I just keep using my old postgres config. Now I'm sure we modified it somewhat in 8, but that was probably the last time. So a performance tuning and config file cleansing is in order :)\nThanks again!Tory",
"msg_date": "Wed, 10 Apr 2013 14:53:40 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql.conf file from like 7.x to 9.2"
}
] |
[
{
"msg_contents": "Hi,\n\nCould some please explain what these warnings mean in postgres.\nI see these messages a lot when automatic vacuum runs.\n\n 1 tm:2013-04-10 11:39:20.074 UTC db: pid:13766 LOG: automatic vacuum\nof table \"DB1.nic.pvxt\": could not (re)acquire exclusive lock for truncate\nscan\n 1 tm:2013-04-10 11:40:22.849 UTC db: pid:14286 LOG: automatic vacuum\nof table \"DB1.nic.pvxt\": could not (re)acquire exclusive lock for truncate\nscan\n 1 tm:2013-04-10 11:41:17.500 UTC db: pid:14491 LOG: automatic vacuum\nof table \"DB1.nic.pvxt\": could not (re)acquire exclusive lock for truncate\nscan\n\nThank you\nNik\n\nHi,Could some please explain what these warnings mean in postgres.I see these messages a lot when automatic vacuum runs.\n 1 tm:2013-04-10 11:39:20.074 UTC db: pid:13766 LOG: automatic vacuum of table \"DB1.nic.pvxt\": could not (re)acquire exclusive lock for truncate scan 1 tm:2013-04-10 11:40:22.849 UTC db: pid:14286 LOG: automatic vacuum of table \"DB1.nic.pvxt\": could not (re)acquire exclusive lock for truncate scan\n 1 tm:2013-04-10 11:41:17.500 UTC db: pid:14491 LOG: automatic vacuum of table \"DB1.nic.pvxt\": could not (re)acquire exclusive lock for truncate scanThank you\nNik",
"msg_date": "Wed, 10 Apr 2013 14:58:11 -0700",
"msg_from": "Nik Tek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres log(pg_logs) have lots of message"
},
{
"msg_contents": "That's just that some other process has some DML going on in the table that is supposed to be truncated. No lock, no truncate.\n\nHTH,\nBambi.\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Nik Tek\nSent: Wednesday, April 10, 2013 4:58 PM\nTo: [email protected]; [email protected]\nSubject: [ADMIN] Postgres log(pg_logs) have lots of message\n\nHi,\n\nCould some please explain what these warnings mean in postgres.\nI see these messages a lot when automatic vacuum runs.\n\n 1 tm:2013-04-10 11:39:20.074 UTC db: pid:13766 LOG: automatic vacuum of table \"DB1.nic.pvxt\": could not (re)acquire exclusive lock for truncate scan\n 1 tm:2013-04-10 11:40:22.849 UTC db: pid:14286 LOG: automatic vacuum of table \"DB1.nic.pvxt\": could not (re)acquire exclusive lock for truncate scan\n 1 tm:2013-04-10 11:41:17.500 UTC db: pid:14491 LOG: automatic vacuum of table \"DB1.nic.pvxt\": could not (re)acquire exclusive lock for truncate scan\n\nThank you\nNik\n\n\n\n\nThis email and any files included with it may contain privileged,\nproprietary and/or confidential information that is for the sole use\nof the intended recipient(s). Any disclosure, copying, distribution,\nposting, or use of the information contained in or attached to this\nemail is prohibited unless permitted by the sender. If you have\nreceived this email in error, please immediately notify the sender\nvia return email, telephone, or fax and destroy this original transmission\nand its included files without reading or saving it in any manner.\nThank you.\n\n\n\n\n\n\n\n\n\nThat’s just that some other process has some DML going on in the table that is supposed to be truncated. No lock, no truncate.\n\nHTH,\nBambi.\n \n\nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Nik Tek\nSent: Wednesday, April 10, 2013 4:58 PM\nTo: [email protected]; [email protected]\nSubject: [ADMIN] Postgres log(pg_logs) have lots of message\n\n \n\nHi,\n\n \n\n\nCould some please explain what these warnings mean in postgres.\n\n\nI see these messages a lot when automatic vacuum runs.\n\n\n \n\n\n\n\n 1 tm:2013-04-10 11:39:20.074 UTC db: pid:13766 LOG: automatic vacuum of table \"DB1.nic.pvxt\": could not (re)acquire exclusive lock for truncate scan\n\n\n 1 tm:2013-04-10 11:40:22.849 UTC db: pid:14286 LOG: automatic vacuum of table \"DB1.nic.pvxt\": could not (re)acquire exclusive lock for truncate scan\n\n\n 1 tm:2013-04-10 11:41:17.500 UTC db: pid:14491 LOG: automatic vacuum of table \"DB1.nic.pvxt\": could not (re)acquire exclusive lock for truncate scan\n\n\n\n \n\n\nThank you\n\n\nNik\n\n\n\n\n\n\nThis email and any files included with it may contain privileged,\nproprietary and/or confidential information that is for the sole use\nof the intended recipient(s). Any disclosure, copying, distribution,\nposting, or use of the information contained in or attached to this\nemail is prohibited unless permitted by the sender. If you have\nreceived this email in error, please immediately notify the sender\nvia return email, telephone, or fax and destroy this original transmission\nand its included files without reading or saving it in any manner.\nThank you.",
"msg_date": "Wed, 10 Apr 2013 22:00:13 +0000",
"msg_from": "Bambi Bellows <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres log(pg_logs) have lots of message"
},
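One way to see which sessions are holding locks or open transactions on the table while autovacuum tries to truncate it; a sketch using standard catalog views, with the table name taken from the log lines above (column names are the 9.2 ones; older releases use procpid and current_query in pg_stat_activity):

    SELECT l.pid, l.mode, l.granted, a.state, a.xact_start, a.query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.pid = l.pid
    WHERE l.relation = 'nic.pvxt'::regclass;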
{
"msg_contents": "Hi Bambi,\n\nThank you the prompt reply.\n\nThis table is very volatile, lot of inserts/updates happen on this\ntables(atleast 20~30 inserts/min).\nWhen auto vacuum tries to run on this table, I get this warning.\n\nIs there a way, I force it to happen, because the table/indexes statistics\nare becoming stale very quickly.\n\nThank you\nNik\n\nHi Bambi,Thank you the prompt reply.This table is very volatile, lot of inserts/updates happen on this tables(atleast 20~30 inserts/min).\nWhen auto vacuum tries to run on this table, I get this warning.Is there a way, I force it to happen, because the table/indexes statistics are becoming stale very quickly.\nThank youNik",
"msg_date": "Wed, 10 Apr 2013 15:05:55 -0700",
"msg_from": "Nik Tek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ADMIN] Postgres log(pg_logs) have lots of message"
},
{
"msg_contents": "On Wednesday, April 10, 2013, Nik Tek wrote:\n\n> Hi Bambi,\n>\n> Thank you the prompt reply.\n>\n> This table is very volatile, lot of inserts/updates happen on this\n> tables(atleast 20~30 inserts/min).\n>\n\nThat number of inserts per minute is not all that many. I suspect that you\nhave sessions which are holding open transactions (and thus locks) for much\nlonger than necessary, and it is this idling on the locks, not the active\ninsertions, that is causing the current problem.\n\nIf this is true, you should try to find the idle-in-transaction connections\nand fix them, because even if they didn't cause this particular problem,\nthey will cause other ones.\n\n\n> When auto vacuum tries to run on this table, I get this warning.\n>\n\n>> LOG: automatic vacuum of table \"DB1.nic.pvxt\": could not (re)acquire\nexclusive lock for truncate scan\n\nYou have at least 8 MB of empty space at the end of the table, but to\nremove it it needs to acquire a lock that it cannot get and so it gives up.\n Unfortunately it now gives up on the following autoanalyze as well. In\n9.2.2 and before, it would also give up on reclaiming the free space, but\nwould likely still do the autoanalyze, which is probably why you didn't see\nit before.\n\n\n> Is there a way, I force it to happen, because the table/indexes statistics\n> are becoming stale very quickly.\n>\n\nThis is something introduced in 9.2.3, and will probably be fixed whenever\n9.2.5 comes out.\n\nIn the mean time, a manual ANALYZE (But not a VACUUM ANALYZE, because would\nfail the same was autvac does) would fix the stats, but it would have to be\nrepeated often as they would just get stale again.\n\nYou could try a VACUUM FULL or CLUSTER if you can tolerate the lock it\nwould hold on the table while it operates. The reason that might solve the\nproblem for you is that it would clear out the empty space, and therefore\nfuture autovac won't see that empty space and try to truncate it.\n Depending on how your table is used, either more empty space could\naccumulate at the end of the table causing the problem to recur, or maybe\nit would fix the problem for good.\n\nCheers,\n\nJeff\n\n>\n\nOn Wednesday, April 10, 2013, Nik Tek wrote:Hi Bambi,Thank you the prompt reply.\nThis table is very volatile, lot of inserts/updates happen on this tables(atleast 20~30 inserts/min).That number of inserts per minute is not all that many. I suspect that you have sessions which are holding open transactions (and thus locks) for much longer than necessary, and it is this idling on the locks, not the active insertions, that is causing the current problem.\nIf this is true, you should try to find the idle-in-transaction connections and fix them, because even if they didn't cause this particular problem, they will cause other ones. \n\nWhen auto vacuum tries to run on this table, I get this warning.>> LOG: automatic vacuum of table \"DB1.nic.pvxt\": could not (re)acquire exclusive lock for truncate scan \nYou have at least 8 MB of empty space at the end of the table, but to remove it it needs to acquire a lock that it cannot get and so it gives up. Unfortunately it now gives up on the following autoanalyze as well. 
In 9.2.2 and before, it would also give up on reclaiming the free space, but would likely still do the autoanalyze, which is probably why you didn't see it before.\nIs there a way, I force it to happen, because the table/indexes statistics are becoming stale very quickly.\nThis is something introduced in 9.2.3, and will probably be fixed whenever 9.2.5 comes out.In the mean time, a manual ANALYZE (But not a VACUUM ANALYZE, because would fail the same was autvac does) would fix the stats, but it would have to be repeated often as they would just get stale again. \nYou could try a VACUUM FULL or CLUSTER if you can tolerate the lock it would hold on the table while it operates. The reason that might solve the problem for you is that it would clear out the empty space, and therefore future autovac won't see that empty space and try to truncate it. Depending on how your table is used, either more empty space could accumulate at the end of the table causing the problem to recur, or maybe it would fix the problem for good.\nCheers,Jeff",
"msg_date": "Thu, 11 Apr 2013 23:14:10 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgres log(pg_logs) have lots of message"
}
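A sketch of the two workarounds Jeff describes, using the 9.2 catalog column names:

    -- find sessions sitting idle inside a transaction (these hold locks that block the truncate)
    SELECT pid, usename, xact_start, query
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
    ORDER BY xact_start;

    -- keep planner statistics fresh in the meantime (plain ANALYZE, not VACUUM ANALYZE)
    ANALYZE nic.pvxt;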
] |
[
{
"msg_contents": "I've configured 2 table like this\n\nCREATE TABLE \"public\".\"User_Statement_Pivot\" (\n\"Email\" varchar(50),\n\"UserId\" varchar(50),\n\"ShortId\" varchar(50),\n\"LastDirectJobMailSentDateTime\" int8,\n\"What\" varchar(4096),\n\"Where\" varchar(4096)\n)\nWITH (OIDS=FALSE)\n;\n\nALTER TABLE \"public\".\"User_Statement_Pivot\" OWNER TO \"postgres\";\n\n\n\nCREATE INDEX \"IX_btree_usp_userIdShortIdEmailLastDJMSDT\" ON\n\"public\".\"User_Statement_Pivot\" USING btree (\"UserId\", \"ShortId\", \"Email\",\n\"LastDirectJobMailSentDateTime\");\n\nCREATE INDEX \"ix_fulltext_usp_what\" ON \"public\".\"User_Statement_Pivot\"\n(\"to_tsvector('italian'::regconfig, \"\"What\"\"::text)\",\n\"to_tsvector('italian'::regconfig, \"\"What\"\"::text)\");\n\nCREATE INDEX \"ix_fulltext_usp_what_en\" ON \"public\".\"User_Statement_Pivot\"\n(\"to_tsvector('english'::regconfig, \"\"What\"\"::text)\",\n\"to_tsvector('english'::regconfig, \"\"What\"\"::text)\");\n\nCREATE INDEX \"ix_fulltext_usp_where\" ON \"public\".\"User_Statement_Pivot\"\n(\"to_tsvector('italian'::regconfig, \"\"Where\"\"::text)\",\n\"to_tsvector('italian'::regconfig, \"\"Where\"\"::text)\");\n\nCREATE INDEX \"ix_usp_what\" ON \"public\".\"User_Statement_Pivot\" USING btree\n(\"What\");\n\n\n CREATE TABLE \"public\".\"User_Statement_Pivot_2\" (\n\"Email\" varchar(50),\n\"UserId\" varchar(50),\n\"ShortId\" varchar(50),\n\"LastDirectJobMailSentDateTime\" int8,\n\"Where\" varchar(4096),\n\"tsv\" tsvector\n)\nWITH (OIDS=FALSE)\n;\n\nALTER TABLE \"public\".\"User_Statement_Pivot_2\" OWNER TO \"postgres\";\n\nCREATE INDEX \"IX_btree_usp2_userIdShortIdEmailLastDJMSDT\" ON\n\"public\".\"User_Statement_Pivot_2\" USING btree (\"UserId\", \"ShortId\",\n\"Email\", \"LastDirectJobMailSentDateTime\");\n\nCREATE INDEX \"textsearch_tsv\" ON \"public\".\"User_Statement_Pivot_2\" (\"tsv\");\n\nCREATE TRIGGER \"tsvectorupdate\" BEFORE INSERT OR UPDATE ON\n\"public\".\"User_Statement_Pivot_2\"\nFOR EACH ROW\nEXECUTE PROCEDURE \"tsvector_update_trigger\"('tsv', 'pg_catalog.italian',\n'What');\n\n\nColumn \"What\" (table User_Statement_Pivot is just a single word or max 2\nwords separeted by space \" \" (ex: programmatore .NET), and tsv\n(table User_Statement_Pivot_2) is populate by materializing column with a\nts_vector of \"What\".\n\nNow if i perform those 2 queries\n\n SELECT * FROM \"User_Statement_Pivot_2\"\nwhere tsv @@ to_tsquery('italian','programmatore|analista')\n\nSELECT * FROM \"User_Statement_Pivot\"\nwhere to_tsvector('italian', tsv) @@\nto_tsquery('italian','programmatore|analista')\n\nRecords on Tables (are same) like 8 milion.\n\nExecution time of 1st query is 2 seconds (result set like 13.027)\nExecution time of 2st query is 3 seconds (result set like 13.027) same\nrecords\n\nThose are query analize\n\nBitmap Heap Scan on \"User_Statement_Pivot\" (cost=1025.27..109801.47\nrows=76463 width=88) (actual time=3.186..12.608 rows=13027 loops=1)\n Recheck Cond: (to_tsvector('italian'::regconfig, (\"What\")::text) @@\n'''programm'' | ''anal'''::tsquery)\n -> Bitmap Index Scan on ix_fulltext_usp_what (cost=0.00..1006.16\nrows=76463 width=0) (actual time=2.315..2.315 rows=13027 loops=1)\n Index Cond: (to_tsvector('italian'::regconfig, (\"What\")::text) @@\n'''programm'' | ''anal'''::tsquery)\nTotal runtime: 12.972 ms\n\n\nBitmap Heap Scan on \"User_Statement_Pivot_2\" (cost=205.46..43876.92\nrows=15068 width=102) (actual time=3.135..18.141 rows=13027 loops=1)\n Recheck Cond: (tsv @@ '''programm'' | ''anal'''::tsquery)\n -> Bitmap Index 
Scan on textsearch_tsv (cost=0.00..201.69 rows=15068\nwidth=0) (actual time=2.254..2.254 rows=13027 loops=1)\n Index Cond: (tsv @@ '''programm'' | ''anal'''::tsquery)\nTotal runtime: 18.502 ms\n\nConfiguration\nPostgreSQL 9.0.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)\n4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\n\n\nIf i increase words number in to_tsquery in OR condition those 2 queries\nare more different (exponentially). I don't understand why a materialized\ncolumn is more slow than a calculeted one.\n\n-- \nLuigi Saggese\nAnalyst Developer\n\n*Work:* +39 328 75 16 236\n*Email:* [email protected]\n*IM:* luigisaggese (Skype)\n *http://it.linkedin.com/in/luigisaggese*\n\nI've configured 2 table like thisCREATE TABLE \"public\".\"User_Statement_Pivot\" (\"Email\" varchar(50),\"UserId\" varchar(50),\n\"ShortId\" varchar(50),\"LastDirectJobMailSentDateTime\" int8,\"What\" varchar(4096),\"Where\" varchar(4096))WITH (OIDS=FALSE)\n;ALTER TABLE \"public\".\"User_Statement_Pivot\" OWNER TO \"postgres\";CREATE INDEX \"IX_btree_usp_userIdShortIdEmailLastDJMSDT\" ON \"public\".\"User_Statement_Pivot\" USING btree (\"UserId\", \"ShortId\", \"Email\", \"LastDirectJobMailSentDateTime\");\nCREATE INDEX \"ix_fulltext_usp_what\" ON \"public\".\"User_Statement_Pivot\" (\"to_tsvector('italian'::regconfig, \"\"What\"\"::text)\", \"to_tsvector('italian'::regconfig, \"\"What\"\"::text)\");\nCREATE INDEX \"ix_fulltext_usp_what_en\" ON \"public\".\"User_Statement_Pivot\" (\"to_tsvector('english'::regconfig, \"\"What\"\"::text)\", \"to_tsvector('english'::regconfig, \"\"What\"\"::text)\");\nCREATE INDEX \"ix_fulltext_usp_where\" ON \"public\".\"User_Statement_Pivot\" (\"to_tsvector('italian'::regconfig, \"\"Where\"\"::text)\", \"to_tsvector('italian'::regconfig, \"\"Where\"\"::text)\");\nCREATE INDEX \"ix_usp_what\" ON \"public\".\"User_Statement_Pivot\" USING btree (\"What\"); CREATE TABLE \"public\".\"User_Statement_Pivot_2\" (\n\"Email\" varchar(50),\"UserId\" varchar(50),\"ShortId\" varchar(50),\"LastDirectJobMailSentDateTime\" int8,\"Where\" varchar(4096),\n\"tsv\" tsvector)WITH (OIDS=FALSE);ALTER TABLE \"public\".\"User_Statement_Pivot_2\" OWNER TO \"postgres\";\nCREATE INDEX \"IX_btree_usp2_userIdShortIdEmailLastDJMSDT\" ON \"public\".\"User_Statement_Pivot_2\" USING btree (\"UserId\", \"ShortId\", \"Email\", \"LastDirectJobMailSentDateTime\");\nCREATE INDEX \"textsearch_tsv\" ON \"public\".\"User_Statement_Pivot_2\" (\"tsv\");CREATE TRIGGER \"tsvectorupdate\" BEFORE INSERT OR UPDATE ON \"public\".\"User_Statement_Pivot_2\"\nFOR EACH ROWEXECUTE PROCEDURE \"tsvector_update_trigger\"('tsv', 'pg_catalog.italian', 'What');Column \"What\" (table User_Statement_Pivot is just a single word or max 2 words separeted by space \" \" (ex: programmatore .NET), and tsv (table User_Statement_Pivot_2) is populate by materializing column with a ts_vector of \"What\".\nNow if i perform those 2 queries SELECT * FROM \"User_Statement_Pivot_2\"where tsv @@ to_tsquery('italian','programmatore|analista')\n SELECT * FROM \"User_Statement_Pivot\"where to_tsvector('italian', tsv) @@ to_tsquery('italian','programmatore|analista')\nRecords on Tables (are same) like 8 milion. 
Execution time of 1st query is 2 seconds (result set like 13.027)Execution time of 2st query is 3 seconds (result set like 13.027) same records\nThose are query analizeBitmap Heap Scan on \"User_Statement_Pivot\" (cost=1025.27..109801.47 rows=76463 width=88) (actual time=3.186..12.608 rows=13027 loops=1)\n Recheck Cond: (to_tsvector('italian'::regconfig, (\"What\")::text) @@ '''programm'' | ''anal'''::tsquery) -> Bitmap Index Scan on ix_fulltext_usp_what (cost=0.00..1006.16 rows=76463 width=0) (actual time=2.315..2.315 rows=13027 loops=1)\n Index Cond: (to_tsvector('italian'::regconfig, (\"What\")::text) @@ '''programm'' | ''anal'''::tsquery)Total runtime: 12.972 ms\nBitmap Heap Scan on \"User_Statement_Pivot_2\" (cost=205.46..43876.92 rows=15068 width=102) (actual time=3.135..18.141 rows=13027 loops=1) Recheck Cond: (tsv @@ '''programm'' | ''anal'''::tsquery)\n -> Bitmap Index Scan on textsearch_tsv (cost=0.00..201.69 rows=15068 width=0) (actual time=2.254..2.254 rows=13027 loops=1) Index Cond: (tsv @@ '''programm'' | ''anal'''::tsquery)\nTotal runtime: 18.502 msConfiguration PostgreSQL 9.0.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\nIf i increase words number in to_tsquery in OR condition those 2 queries are more different (exponentially). I don't understand why a materialized column is more slow than a calculeted one.\n-- Luigi SaggeseAnalyst DeveloperWork: +39 328 75 16 236\nEmail: [email protected]\nIM: luigisaggese (Skype)\n http://it.linkedin.com/in/luigisaggese",
"msg_date": "Thu, 11 Apr 2013 09:47:20 +0200",
"msg_from": "Luigi Saggese <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance ts_vector fulltext search"
},
{
"msg_contents": "Luigi Saggese <[email protected]> writes:\n> I've configured 2 table like this\n> ...\n> CREATE INDEX \"ix_fulltext_usp_what\" ON \"public\".\"User_Statement_Pivot\"\n> (\"to_tsvector('italian'::regconfig, \"\"What\"\"::text)\",\n> \"to_tsvector('italian'::regconfig, \"\"What\"\"::text)\");\n\nWhen I try that I get\n\nERROR: column \"to_tsvector('italian'::regconfig, \"What\"::text)\" does not exist\n\nas indeed I should, because you seem to be confused about SQL quoting\nrules --- that whole expression is being taken as a name. However, the\nbigger problem here is that you're creating btree indexes, which are not\nuseful for full text search. They need to be gin or gist indexes. See\nexamples at\n\nhttp://www.postgresql.org/docs/9.2/static/textsearch-tables.html#TEXTSEARCH-TABLES-INDEX\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Apr 2013 10:58:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance ts_vector fulltext search"
}
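A sketch of the GIN indexes the documentation link describes, matching the expressions used in the queries above; this assumes the existing btree "full text" indexes are dropped first so the names can be reused:

    CREATE INDEX ix_fulltext_usp_what
        ON "User_Statement_Pivot"
        USING gin (to_tsvector('italian', "What"));

    CREATE INDEX textsearch_tsv
        ON "User_Statement_Pivot_2"
        USING gin (tsv);

With GIN (or GiST) in place, both the expression form (to_tsvector('italian', "What") @@ ...) and the materialized-column form (tsv @@ ...) can use an index instead of the btree indexes, which full text search cannot use at all.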
] |
[
{
"msg_contents": "All,\n\nWe have performance issue in Psql8.3 vacuum run. The CPU usage going 300+ and application getting slow. How do we avoid high cpu and smooth vacuum tables.\n\nThanks in advance\nPalani\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Apr 2013 21:09:33 +0000",
"msg_from": "\"Thiyagarajan, Palaniappan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PsqL8.3"
},
{
"msg_contents": "On 11/04/13 22:09, Thiyagarajan, Palaniappan wrote:\n> All,\n>\n> We have performance issue in Psql8.3 vacuum run. The CPU usage going\n> 300+ and application getting slow. How do we avoid high cpu and\n> smooth vacuum tables.\n\nI'm afraid this isn't nearly enough information for anyone to help.\n\n1. Full version details, some idea of hardware and database size might \nbe useful. Exactly when this happens etc.\n\n2. Is this a manual vacuum or autovacuum?\n\n3. Are you familiar with the manual page regarding the vacuum settings?\nhttp://www.postgresql.org/docs/8.3/static/runtime-config-autovacuum.html\n\n4. If so, what changes have you made?\n\n5. You are aware that 8.3 is end-of-life? And you are running 8.3.23 \nuntil you upgrade, aren't you?\nhttp://www.postgresql.org/support/versioning/\n\n\nTypically you'd expect disk i/o to be the limiting factor with vacuum \nrather than CPU. However, it might just be that I've misunderstood your \ndescription. More details please.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Apr 2013 11:02:02 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PsqL8.3"
}
] |
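The thread never says whether the spike comes from a manual VACUUM or from autovacuum, so here is only a hedged sketch of the cost-based delay settings Richard's link covers, in 8.3-era syntax. The table name is hypothetical and the value is an illustrative starting point, not a recommendation for this particular system:

    -- Throttle a manual VACUUM for the current session only
    -- (vacuum_cost_delay defaults to 0, i.e. unthrottled, for manual vacuum):
    SET vacuum_cost_delay = 20;   -- milliseconds to nap each time the cost limit is reached
    VACUUM VERBOSE ANALYZE some_busy_table;

    -- See how autovacuum is currently configured:
    SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';

Cost-based delay trades a longer-running vacuum for a smaller CPU and I/O spike while it runs.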
[
{
"msg_contents": "Our server is running postgresql 8.4.15. During day time the cpu usage always\naround 80%, but it's not IO bound. The swap space is looking OK also. Also\nwe setup pgbadger and enable all logs to monitor the slow query but they all\nfinished quick. Usually it has 60 incoming connections, and we have\npgbouncer to control the pool. Any suggestions I can look into it? Thank\nyou.\n________________________________________________________________________\ntop - 19:21:28 up 33 days, 9:50, 1 user, load average: 23.07, 22.39,\n18.68\nTasks: 221 total, 19 running, 202 sleeping, 0 stopped, 0 zombie\nCpu0 : 96.3%us, 1.3%sy, 0.0%ni, 1.0%id, 0.3%wa, 0.0%hi, 1.0%si, \n0.0%st\nCpu1 : 98.0%us, 0.7%sy, 0.0%ni, 0.7%id, 0.3%wa, 0.0%hi, 0.3%si, \n0.0%st\nCpu2 : 98.7%us, 0.7%sy, 0.0%ni, 0.3%id, 0.0%wa, 0.0%hi, 0.3%si, \n0.0%st\nCpu3 : 97.7%us, 0.7%sy, 0.0%ni, 0.7%id, 0.7%wa, 0.0%hi, 0.3%si, \n0.0%st\nCpu4 : 96.4%us, 1.3%sy, 0.0%ni, 0.7%id, 1.0%wa, 0.0%hi, 0.7%si, \n0.0%st\nCpu5 : 98.3%us, 0.7%sy, 0.0%ni, 0.7%id, 0.3%wa, 0.0%hi, 0.0%si, \n0.0%st\nCpu6 : 98.7%us, 0.7%sy, 0.0%ni, 0.7%id, 0.0%wa, 0.0%hi, 0.0%si, \n0.0%st\nCpu7 : 97.7%us, 0.7%sy, 0.0%ni, 1.0%id, 0.3%wa, 0.0%hi, 0.3%si, \n0.0%st\nMem: 32959796k total, 32802508k used, 157288k free, 145484k buffers\nSwap: 67108856k total, 1596k used, 67107260k free, 29718168k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n32119 postgres 16 0 8005m 2.7g 2.7g R 48.8 8.7 9:11.03 postmaster \n 3688 postgres 15 0 8005m 2.4g 2.4g R 41.5 7.7 7:52.75 postmaster \n23530 postgres 15 0 8003m 2.9g 2.8g S 39.2 9.1 11:02.56 postmaster \n23528 postgres 16 0 8002m 2.8g 2.8g S 37.5 8.9 11:13.92 postmaster \n23534 postgres 15 0 8007m 2.9g 2.9g S 36.5 9.3 11:21.66 postmaster \n\n_______________________________________________________________________\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n18 5 1596 157668 145572 29696372 0 0 150 160 0 1 36 1 60 \n4 0\n27 2 1596 163952 145476 29689472 0 0 1408 1472 2142 2190 97 1 0 \n1 0\n21 1 1596 162824 145488 29691976 0 0 1548 3300 2269 2592 96 1 2 \n1 0\n24 3 1596 161248 145488 29693124 0 0 1144 1612 3426 2149 98 1 1 \n0 0\n11 2 1596 158900 145488 29696108 0 0 2368 1776 2189 2324 98 1 0 \n1 0\n15 3 1596 162144 145324 29693576 0 0 4412 1592 2669 2852 97 1 1 \n1 0\n\n____________________________________________________________________\ncheckpoint_completion_target = 0.9\nmax_connections = 160\nshared_buffers = 7680MB\nwork_mem = 104MB\nmaintenance_work_mem = 1GB \neffective_cache_size = 22GB\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/High-CPU-usage-buy-low-I-O-wait-tp5751860.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Apr 2013 16:30:54 -0700 (PDT)",
"msg_from": "bing1221 <[email protected]>",
"msg_from_op": true,
"msg_subject": "High CPU usage buy low I/O wait"
},
{
"msg_contents": "On Thu, Apr 11, 2013 at 04:30:54PM -0700, bing1221 wrote:\n> Our server is running postgresql 8.4.15. During day time the cpu usage always\n> around 80%, but it's not IO bound. The swap space is looking OK also. Also\n> we setup pgbadger and enable all logs to monitor the slow query but they all\n> finished quick. Usually it has 60 incoming connections, and we have\n> pgbouncer to control the pool. Any suggestions I can look into it? Thank\n> you.\n\nWhy do you think you have a problem? It sounds like normal system usage.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Apr 2013 15:54:08 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU usage buy low I/O wait"
}
] |
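Ken's point stands, but if you want to see where the CPU is actually going, a sketch that works on the 8.4 release used in this thread (where pg_stat_activity exposes procpid and current_query; newer releases renamed these to pid and query):

    SELECT procpid, usename, waiting,
           now() - query_start AS running_for,
           current_query
    FROM pg_stat_activity
    WHERE current_query <> '<IDLE>'
    ORDER BY query_start;

Run it a few times during the busy period; many short, frequently repeated statements can saturate the CPUs without ever crossing the slow-query log threshold, which would be consistent with what pgbadger showed here.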
[
{
"msg_contents": "Hi guys.\nI compiled my postrges server (9.1.4) with default segment size (16MB).\nShould it be enough? Should I increase this size in compilation?\n\nHi guys.I compiled my postrges server (9.1.4) with default segment size (16MB).Should it be enough? Should I increase this size in compilation?",
"msg_date": "Fri, 12 Apr 2013 13:45:07 -0300",
"msg_from": "Rodrigo Barboza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Segment best size"
},
{
"msg_contents": "On Apr 12, 2013, at 9:45 AM, Rodrigo Barboza wrote:\n\n> Hi guys.\n> I compiled my postrges server (9.1.4) with default segment size (16MB).\n> Should it be enough? Should I increase this size in compilation?\n\nUnlike some default values in the configuration file, the compiled-in defaults work well for most people.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Apr 2013 10:52:17 -0700",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Segment best size"
},
{
"msg_contents": "On Fri, Apr 12, 2013 at 2:52 PM, Ben Chobot <[email protected]> wrote:\n\n> On Apr 12, 2013, at 9:45 AM, Rodrigo Barboza wrote:\n>\n> > Hi guys.\n> > I compiled my postrges server (9.1.4) with default segment size (16MB).\n> > Should it be enough? Should I increase this size in compilation?\n>\n> Unlike some default values in the configuration file, the compiled-in\n> defaults work well for most people.\n\n\nThanks!\n\nOn Fri, Apr 12, 2013 at 2:52 PM, Ben Chobot <[email protected]> wrote:\nOn Apr 12, 2013, at 9:45 AM, Rodrigo Barboza wrote:\n\n> Hi guys.\n> I compiled my postrges server (9.1.4) with default segment size (16MB).\n> Should it be enough? Should I increase this size in compilation?\n\nUnlike some default values in the configuration file, the compiled-in defaults work well for most people.Thanks!",
"msg_date": "Fri, 12 Apr 2013 16:17:28 -0300",
"msg_from": "Rodrigo Barboza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Segment best size"
},
{
"msg_contents": "On Friday, April 12, 2013, Rodrigo Barboza wrote:\n\n> Hi guys.\n> I compiled my postrges server (9.1.4) with default segment size (16MB).\n> Should it be enough? Should I increase this size in compilation?\n>\n\nTo recommend that you deviate from the defaults, we would have to know\nsomething about your server, or how you intend to use it. But you haven't\ntold us any of those things. Is there something about your intended use of\nthe server that made you wonder about this setting in particular?\n\nAnyway, wal seg size is one of the last things I would worry about\nchanging. Also, I suspect non-default settings of that parameter get\nalmost zero testing (either alpha, beta, or in the field), and so would\nconsider it mildly dangerous to deploy to production.\n\nCheers,\n\nJeff\n\nOn Friday, April 12, 2013, Rodrigo Barboza wrote:Hi guys.I compiled my postrges server (9.1.4) with default segment size (16MB).\nShould it be enough? Should I increase this size in compilation?To recommend that you deviate from the defaults, we would have to know something about your server, or how you intend to use it. But you haven't told us any of those things. Is there something about your intended use of the server that made you wonder about this setting in particular?\nAnyway, wal seg size is one of the last things I would worry about changing. Also, I suspect non-default settings of that parameter get almost zero testing (either alpha, beta, or in the field), and so would consider it mildly dangerous to deploy to production.\nCheers,Jeff",
"msg_date": "Fri, 12 Apr 2013 18:20:38 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Segment best size"
},
{
"msg_contents": "On Fri, Apr 12, 2013 at 10:20 PM, Jeff Janes <[email protected]> wrote:\n\n> On Friday, April 12, 2013, Rodrigo Barboza wrote:\n>\n>> Hi guys.\n>> I compiled my postrges server (9.1.4) with default segment size (16MB).\n>> Should it be enough? Should I increase this size in compilation?\n>>\n>\n> To recommend that you deviate from the defaults, we would have to know\n> something about your server, or how you intend to use it. But you haven't\n> told us any of those things. Is there something about your intended use of\n> the server that made you wonder about this setting in particular?\n>\n> Anyway, wal seg size is one of the last things I would worry about\n> changing. Also, I suspect non-default settings of that parameter get\n> almost zero testing (either alpha, beta, or in the field), and so would\n> consider it mildly dangerous to deploy to production.\n>\n> Cheers,\n>\n> Jeff\n>\n\n\nI was receiving warning messages of pgstat timeout.\nI started looking for the solution for this problem and this doubt came out.\nBut it doesn't seem to be the problem.\n\nOn Fri, Apr 12, 2013 at 10:20 PM, Jeff Janes <[email protected]> wrote:\nOn Friday, April 12, 2013, Rodrigo Barboza wrote:\nHi guys.I compiled my postrges server (9.1.4) with default segment size (16MB).\nShould it be enough? Should I increase this size in compilation?To recommend that you deviate from the defaults, we would have to know something about your server, or how you intend to use it. But you haven't told us any of those things. Is there something about your intended use of the server that made you wonder about this setting in particular?\nAnyway, wal seg size is one of the last things I would worry about changing. Also, I suspect non-default settings of that parameter get almost zero testing (either alpha, beta, or in the field), and so would consider it mildly dangerous to deploy to production.\nCheers,Jeff\nI was receiving warning messages of pgstat timeout.I started looking for the solution for this problem and this doubt came out.\nBut it doesn't seem to be the problem.",
"msg_date": "Sat, 13 Apr 2013 14:29:13 -0300",
"msg_from": "Rodrigo Barboza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Segment best size"
},
{
"msg_contents": "On Sat, Apr 13, 2013 at 10:29 AM, Rodrigo Barboza\n<[email protected]>wrote:\n\n>\n>\n> I was receiving warning messages of pgstat timeout.\n> I started looking for the solution for this problem and this doubt came\n> out.\n> But it doesn't seem to be the problem.\n>\n\nUsually that just means that you are dirtying pages faster than your hard\ndrives can write them out, leading to IO contention. Sometimes you can do\nsomething about this by changing the work-load to dirty pages in more\ncompact regions, or by changing the configuration. Often the only\neffective thing you can do is buy more hard-drives.\n\nCheers,\n\nJeff\n\nOn Sat, Apr 13, 2013 at 10:29 AM, Rodrigo Barboza <[email protected]> wrote:\n\nI was receiving warning messages of pgstat timeout.I started looking for the solution for this problem and this doubt came out.\nBut it doesn't seem to be the problem.\nUsually that just means that you are dirtying pages faster than your hard drives can write them out, leading to IO contention. Sometimes you can do something about this by changing the work-load to dirty pages in more compact regions, or by changing the configuration. Often the only effective thing you can do is buy more hard-drives.\nCheers,Jeff",
"msg_date": "Sat, 13 Apr 2013 12:51:48 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Segment best size"
},
{
"msg_contents": "On Sat, Apr 13, 2013 at 4:51 PM, Jeff Janes <[email protected]> wrote:\n\n> On Sat, Apr 13, 2013 at 10:29 AM, Rodrigo Barboza <[email protected]\n> > wrote:\n>\n>>\n>>\n>> I was receiving warning messages of pgstat timeout.\n>> I started looking for the solution for this problem and this doubt came\n>> out.\n>> But it doesn't seem to be the problem.\n>>\n>\n> Usually that just means that you are dirtying pages faster than your hard\n> drives can write them out, leading to IO contention. Sometimes you can do\n> something about this by changing the work-load to dirty pages in more\n> compact regions, or by changing the configuration. Often the only\n> effective thing you can do is buy more hard-drives.\n>\n> Cheers,\n>\n> Jeff\n>\n\n\nWould it help if I changed the checkpoint_completion_target to something\nclose to 0.7 (mine is the default 0.5) or raise the checkpoint_segments (it\nis 32 now)?\n\nOn Sat, Apr 13, 2013 at 4:51 PM, Jeff Janes <[email protected]> wrote:\nOn Sat, Apr 13, 2013 at 10:29 AM, Rodrigo Barboza <[email protected]> wrote:\n\n\n\nI was receiving warning messages of pgstat timeout.I started looking for the solution for this problem and this doubt came out.\nBut it doesn't seem to be the problem.\nUsually that just means that you are dirtying pages faster than your hard drives can write them out, leading to IO contention. Sometimes you can do something about this by changing the work-load to dirty pages in more compact regions, or by changing the configuration. Often the only effective thing you can do is buy more hard-drives.\nCheers,Jeff\nWould it help if I changed the checkpoint_completion_target to something close to 0.7 (mine is the default 0.5) or raise the checkpoint_segments (it is 32 now)?",
"msg_date": "Sun, 14 Apr 2013 00:01:13 -0300",
"msg_from": "Rodrigo Barboza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Segment best size"
},
{
"msg_contents": "On Sat, Apr 13, 2013 at 9:01 PM, Rodrigo Barboza\n<[email protected]> wrote:\n\n> Would it help if I changed the checkpoint_completion_target to something close to 0.7 (mine is the default 0.5) or raise the checkpoint_segments (it is 32 now)?\n\nThat depends. What are your access patterns like? Do you have 1 or 2\nwriters, a dozen, a hundred? If you've got hundreds of active writers\nthen look at pooling an also commit_siblings and commit_delay.\n\n\n--\nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 13 Apr 2013 23:20:56 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Segment best size"
},
{
"msg_contents": "On Saturday, April 13, 2013, Rodrigo Barboza wrote:\n\n>\n>\n>\n> On Sat, Apr 13, 2013 at 4:51 PM, Jeff Janes <[email protected]<javascript:_e({}, 'cvml', '[email protected]');>\n> > wrote:\n>\n>> On Sat, Apr 13, 2013 at 10:29 AM, Rodrigo Barboza <\n>> [email protected] <javascript:_e({}, 'cvml',\n>> '[email protected]');>> wrote:\n>>\n>>>\n>>>\n>>> I was receiving warning messages of pgstat timeout.\n>>> I started looking for the solution for this problem and this doubt came\n>>> out.\n>>> But it doesn't seem to be the problem.\n>>>\n>>\n>> Usually that just means that you are dirtying pages faster than your hard\n>> drives can write them out, leading to IO contention. Sometimes you can do\n>> something about this by changing the work-load to dirty pages in more\n>> compact regions, or by changing the configuration. Often the only\n>> effective thing you can do is buy more hard-drives.\n>>\n>\nI meant congestion, not contention.\n\n\n>\n>>\n> Would it help if I changed the checkpoint_completion_target to something\n> close to 0.7 (mine is the default 0.5)\n>\n\nProbably not. The practical difference between 0.5 and 0.7 is so small,\nthat even if it did make a difference at one place and time, it would stop\nmaking a difference again in the future for no apparent reason.\n\n\n> or raise the checkpoint_segments (it is 32 now)?\n>\n\nMaybe. If you are dirtying the same set of pages over and over again, and\nthat set of pages fits in shared_buffers, then lengthening the time between\ncheckpoints could have a good effect. Otherwise, it probably would not.\n\nYour best bet may be just to try it and see. But first, have you tried to\ntune shared_buffers?\n\nWhat are you doing? Bulk loading? Frantically updating a small number of\nrows?\n\nDo you have pg_xlog on a separate IO controller from the rest of the data?\n Do you have battery-backed/nonvolatile write cache?\n\nCheers,\n\nJeff\n\nOn Saturday, April 13, 2013, Rodrigo Barboza wrote:\nOn Sat, Apr 13, 2013 at 4:51 PM, Jeff Janes <[email protected]> wrote:\nOn Sat, Apr 13, 2013 at 10:29 AM, Rodrigo Barboza <[email protected]> wrote:\n\n\n\nI was receiving warning messages of pgstat timeout.I started looking for the solution for this problem and this doubt came out.\nBut it doesn't seem to be the problem.\nUsually that just means that you are dirtying pages faster than your hard drives can write them out, leading to IO contention. Sometimes you can do something about this by changing the work-load to dirty pages in more compact regions, or by changing the configuration. Often the only effective thing you can do is buy more hard-drives.\nI meant congestion, not contention. \n\n\nWould it help if I changed the checkpoint_completion_target to something close to 0.7 (mine is the default 0.5)\nProbably not. The practical difference between 0.5 and 0.7 is so small, that even if it did make a difference at one place and time, it would stop making a difference again in the future for no apparent reason.\n or raise the checkpoint_segments (it is 32 now)?\nMaybe. If you are dirtying the same set of pages over and over again, and that set of pages fits in shared_buffers, then lengthening the time between checkpoints could have a good effect. Otherwise, it probably would not. \nYour best bet may be just to try it and see. But first, have you tried to tune shared_buffers?What are you doing? Bulk loading? Frantically updating a small number of rows?\nDo you have pg_xlog on a separate IO controller from the rest of the data? 
Do you have battery-backed/nonvolatile write cache?Cheers,Jeff",
"msg_date": "Sun, 14 Apr 2013 15:34:40 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Segment best size"
},
{
"msg_contents": "On Sun, Apr 14, 2013 at 7:34 PM, Jeff Janes <[email protected]> wrote:\n\n> On Saturday, April 13, 2013, Rodrigo Barboza wrote:\n>\n>>\n>>\n>>\n>> On Sat, Apr 13, 2013 at 4:51 PM, Jeff Janes <[email protected]> wrote:\n>>\n>>> On Sat, Apr 13, 2013 at 10:29 AM, Rodrigo Barboza <\n>>> [email protected]> wrote:\n>>>\n>>>>\n>>>>\n>>>> I was receiving warning messages of pgstat timeout.\n>>>> I started looking for the solution for this problem and this doubt came\n>>>> out.\n>>>> But it doesn't seem to be the problem.\n>>>>\n>>>\n>>> Usually that just means that you are dirtying pages faster than your\n>>> hard drives can write them out, leading to IO contention. Sometimes you\n>>> can do something about this by changing the work-load to dirty pages in\n>>> more compact regions, or by changing the configuration. Often the only\n>>> effective thing you can do is buy more hard-drives.\n>>>\n>>\n> I meant congestion, not contention.\n>\n>\n>>\n>>>\n>> Would it help if I changed the checkpoint_completion_target to something\n>> close to 0.7 (mine is the default 0.5)\n>>\n>\n> Probably not. The practical difference between 0.5 and 0.7 is so small,\n> that even if it did make a difference at one place and time, it would stop\n> making a difference again in the future for no apparent reason.\n>\n>\n>> or raise the checkpoint_segments (it is 32 now)?\n>>\n>\n> Maybe. If you are dirtying the same set of pages over and over again, and\n> that set of pages fits in shared_buffers, then lengthening the time between\n> checkpoints could have a good effect. Otherwise, it probably would not.\n>\n> Your best bet may be just to try it and see. But first, have you tried to\n> tune shared_buffers?\n>\n> What are you doing? Bulk loading? Frantically updating a small number of\n> rows?\n>\n> Do you have pg_xlog on a separate IO controller from the rest of the data?\n> Do you have battery-backed/nonvolatile write cache?\n>\n> Cheers,\n>\n> Jeff\n>\n\n\nMy shared_buffer is tuned for 25% of memory.\nMy pg_xlog is in the same device.\nIt is not bulk loading,\nThe battery I have to check, because the machine is not with me.\n\nOn Sun, Apr 14, 2013 at 7:34 PM, Jeff Janes <[email protected]> wrote:\nOn Saturday, April 13, 2013, Rodrigo Barboza wrote:\n\n\n\nOn Sat, Apr 13, 2013 at 4:51 PM, Jeff Janes <[email protected]> wrote:\nOn Sat, Apr 13, 2013 at 10:29 AM, Rodrigo Barboza <[email protected]> wrote:\n\n\n\nI was receiving warning messages of pgstat timeout.I started looking for the solution for this problem and this doubt came out.\nBut it doesn't seem to be the problem.\nUsually that just means that you are dirtying pages faster than your hard drives can write them out, leading to IO contention. Sometimes you can do something about this by changing the work-load to dirty pages in more compact regions, or by changing the configuration. Often the only effective thing you can do is buy more hard-drives.\nI meant congestion, not contention. \n\n\nWould it help if I changed the checkpoint_completion_target to something close to 0.7 (mine is the default 0.5)\nProbably not. The practical difference between 0.5 and 0.7 is so small, that even if it did make a difference at one place and time, it would stop making a difference again in the future for no apparent reason.\n\n or raise the checkpoint_segments (it is 32 now)?\nMaybe. If you are dirtying the same set of pages over and over again, and that set of pages fits in shared_buffers, then lengthening the time between checkpoints could have a good effect. 
Otherwise, it probably would not. \nYour best bet may be just to try it and see. But first, have you tried to tune shared_buffers?What are you doing? Bulk loading? Frantically updating a small number of rows?\nDo you have pg_xlog on a separate IO controller from the rest of the data? Do you have battery-backed/nonvolatile write cache?Cheers,Jeff\nMy shared_buffer is tuned for 25% of memory.My pg_xlog is in the same device.It is not bulk loading,The battery I have to check, because the machine is not with me.",
"msg_date": "Sun, 14 Apr 2013 20:29:42 -0300",
"msg_from": "Rodrigo Barboza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Segment best size"
}
] |
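Jeff suggests simply trying a larger checkpoint_segments. One way to decide whether that is worth doing is to look at how checkpoints are being triggered; a sketch against pg_stat_bgwriter, which exists in the 9.1 release used in this thread:

    -- checkpoints_req counts checkpoints forced by WAL volume (checkpoint_segments),
    -- checkpoints_timed those started by checkpoint_timeout. A large checkpoints_req
    -- share suggests raising checkpoint_segments; buffers_backend much larger than
    -- buffers_checkpoint + buffers_clean points at backends doing their own writes.
    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_clean, buffers_backend
    FROM pg_stat_bgwriter;

This does not change the answer to the original question (the 16MB compiled-in WAL segment size is fine for most installations), but it does show whether the pgstat warnings coincide with write pressure around checkpoints.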
[
{
"msg_contents": "Hi,\n\n(I have put the contents of this email at http://pastebin.com/6h49zrYE\nfor better readibility)\n\nI have been circling around the following problem for a few hours and\nI am wondering if anyone has any suggestions about why the the second\nquery is taking so much longer than the first.\nBoth tables have similar row counts, all tables are partitions, though\nthe queries are querying the partitions directly, not the parent\ntables.\nThe 'DELIVERED' status comprises almost 100% of the rows in the\nnotification_xxx tables.\nANALYZE was run right before each query was issued.\nThis test database is on EC2. PG 9.2.4, CentOS release 5.9, 70GB RAM.\nGUC changes listed at the bottom. Are there any other details I can supply?\nNote that the same behaviour occurred on 9.2.3, but I cannot say for\nsure for whether it occurred on 9.2.2.\nDropping the random_page_cost to 1 helped the slow query (after it is\ncached, drops down to about 1.7s).\nSetting enable_mergejoin to false causes the slow query to choose a\nNested Loop and shaves off about 30% of the execution time.\n\n\n\n-- Fast query, first run, uncached, ORDER BY nearly-unique tstamp_utc\nexplain (analyze, buffers)\nSELECT e.id,e.device,e.engine,e.form,e.device_type_id,ncbs.delivery_tstamp_utc\nFROM event_20130404 e\nINNER JOIN notifications_20130404 ncbs ON (e.id = ncbs.event_id)\nWHERE TRUE\nAND e.date_utc = '2013-04-04'\nAND e.tstamp_utc BETWEEN '2013-04-04 10:00:00' AND '2013-04-04 18:00:00'\nAND e.org_id = 216471\nAND ncbs.event_creation_tstamp_utc BETWEEN '2013-04-04 10:00:00' AND\n'2013-04-04 18:00:00'\nAND ncbs.status = 'DELIVERED'\nORDER BY e.tstamp_utc desc\noffset 10000 limit 100;\n\n Limit (cost=29639.24..29935.63 rows=100 width=50) (actual\ntime=159.103..167.637 rows=100 loops=1)\n Buffers: shared hit=46588 read=1055\n I/O Timings: read=63.416\n -> Nested Loop (cost=0.00..1882500.73 rows=635138 width=50)\n(actual time=0.240..159.693 rows=10100 loops=1)\n Buffers: shared hit=46588 read=1055\n I/O Timings: read=63.416\n -> Index Scan Backward using\nevent_20130404_tstamp_utc_org_id_idx on event_20130404 e\n(cost=0.00..315412.34 rows=1876877 width=42) (actual\ntime=0.129..61.981 rows=10100 loops=1)\n Index Cond: ((tstamp_utc >= '2013-04-04\n10:00:00'::timestamp without time zone) AND (tstamp_utc <= '2013-04-04\n18:00:00'::timestamp without time zone) AND (org_id = 216471))\n Filter: (date_utc = '2013-04-04'::date)\n Buffers: shared hit=6309 read=833\n I/O Timings: read=40.380\n -> Index Scan using\nnotifications_20130404_event_id_org_id_pk on notifications_20130404\nncbs (cost=0.00..0.82 rows=1 width=16) (actual time=0.006..0.006\nrows=1 loops=10100)\n Index Cond: (event_id = e.id)\n Filter: ((event_creation_tstamp_utc >= '2013-04-04\n10:00:00'::timestamp without time zone) AND (event_creation_tstamp_utc\n<= '2013-04-04 18:00:00'::timestamp without time zone) AND (status =\n'DELIVERED'::text))\n Buffers: shared hit=40279 read=222\n I/O Timings: read=23.036\n Total runtime: 170.436 ms\n(17 rows)\n\n\n\n\n-- Slow query, uncached, ORDER BY primary key\nexplain (analyze,buffers)\nSELECT e.device,e.id,e.engine,e.form,e.device_type_id,ncbs.delivery_tstamp_utc\nFROM event_20130405 e\nINNER JOIN notifications_20130405 ncbs ON (e.id = ncbs.event_id)\nWHERE TRUE\nAND e.date_utc = '2013-04-05'\nAND e.tstamp_utc BETWEEN '2013-04-05 10:00:00' AND '2013-04-05 18:00:00'\nAND e.org_id = 216471\nAND ncbs.event_creation_tstamp_utc BETWEEN '2013-04-05 10:00:00' AND\n'2013-04-05 18:00:00'\nAND ncbs.status = 'DELIVERED'\nORDER BY 
e.id desc\nOFFSET 10000 LIMIT 100;\n\n Limit (cost=14305.61..14447.99 rows=100 width=42) (actual\ntime=13949.028..13950.353 rows=100 loops=1)\n Buffers: shared hit=2215465 read=141796 written=12128\n I/O Timings: read=11308.341 write=116.673\n -> Merge Join (cost=67.31..887063.35 rows=622965 width=42)\n(actual time=13761.135..13933.879 rows=10100 loops=1)\n Merge Cond: (e.id = ncbs.event_id)\n Buffers: shared hit=2215465 read=141796 written=12128\n I/O Timings: read=11308.341 write=116.673\n -> Index Scan Backward using event_20130405_id_pk on\nevent_20130405 e (cost=0.00..612732.34 rows=1889715 width=34) (actual\ntime=2076.812..2111.274 rows=10100 loops=1)\n Filter: ((tstamp_utc >= '2013-04-05\n10:00:00'::timestamp without time zone) AND (tstamp_utc <= '2013-04-05\n18:00:00'::timestamp without time zone) AND (date_utc =\n'2013-04-05'::date) AND (org_id = 216471))\n Rows Removed by Filter: 1621564\n Buffers: shared hit=1176391 read=113344 written=11918\n I/O Timings: read=774.769 write=113.095\n -> Index Scan Backward using\nnotifications_20130405_event_id_org_id_pk on notifications_20130405\nncbs (cost=0.00..258079.61 rows=2135847 width=16) (actual\ntime=11684.312..11784.738 rows=11653 loops=1)\n Filter: ((event_creation_tstamp_utc >= '2013-04-05\n10:00:00'::timestamp without time zone) AND (event_creation_tstamp_utc\n<= '2013-04-05 18:00:00'::timestamp without time zone) AND (status =\n'DELIVERED'::text))\n Rows Removed by Filter: 1620011\n Buffers: shared hit=1039074 read=28452 written=210\n I/O Timings: read=10533.572 write=3.578\n Total runtime: 13950.458 ms\n(18 rows)\n\n\nselect * from admin.changed_guc_settings ;\n name |\n current_setting\n---------------------------------+---------------------------------------------------------------------------------------------------------------\n version | PostgreSQL 9.2.4 on\nx86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red\nHat 4.1.2-52), 64-bit\n autovacuum_analyze_scale_factor | 0.01\n autovacuum_naptime | 20s\n autovacuum_vacuum_cost_delay | 5ms\n autovacuum_vacuum_cost_limit | 1000\n autovacuum_vacuum_scale_factor | 0.01\n autovacuum_vacuum_threshold | 50\n bgwriter_lru_maxpages | 400\n checkpoint_completion_target | 0.8\n checkpoint_segments | 64\n client_encoding | UTF8\n commit_delay | 10\n effective_cache_size | 54GB\n effective_io_concurrency | 4\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n log_autovacuum_min_duration | 0\n log_filename | postgresql-%Y-%m-%d.log\n log_line_prefix | %t [%p]: [%l-1] (user=%u) (rhost=%h) (db=%d)\n log_lock_waits | on\n log_min_duration_statement | 100ms\n log_temp_files | 0\n logging_collector | on\n maintenance_work_mem | 6GB\n max_connections | 200\n max_locks_per_transaction | 100\n max_stack_depth | 6MB\n pg_stat_statements.max | 1000\n pg_stat_statements.track | all\n random_page_cost | 1\n search_path | \"$user\",public,admin\n server_encoding | UTF8\n shared_buffers | 6GB\n shared_preload_libraries | pg_stat_statements\n track_activity_query_size | 6144\n track_functions | pl\n track_io_timing | on\n vacuum_cost_limit | 2000\n wal_buffers | 16MB\n wal_keep_segments | 100\n wal_level | minimal\n work_mem | 900MB\n\n\n\n\ntest01=# \\d event_20130406\n Table \"public.event_20130406\"\n Column | Type |\n Modifiers\n-----------------------+-----------------------------+-------------------------------------------------------------------\n id | bigint | not null\ndefault nextval('event_id_seq'::regclass)\n org_id | integer | not null\n date_utc | date | not null\ndefault 
(timezone('UTC'::text, clock_timestamp()))::date\n tstamp_utc | timestamp without time zone | not null\ndefault timezone('UTC'::text, clock_timestamp())\n device | text | not null\n provider | text | not null\n form | text |\n engine | text |\n user_service_provider | text |\n sender | text |\n properties | json |\n delivery_windows | json |\n device_type_id | smallint |\nIndexes:\n \"event_20130406_id_pk\" PRIMARY KEY, btree (id) WITH (fillfactor=100)\n \"event_20130406_date_utc_idx\" btree (date_utc) WITH (fillfactor=100)\n \"event_20130406_device_type_id_idx\" btree (device_type_id) WITH\n(fillfactor=100)\n \"event_20130406_org_id_idx\" btree (org_id) WITH (fillfactor=100)\n \"event_20130406_tstamp_utc_org_id_idx\" btree (tstamp_utc, org_id)\nWITH (fillfactor=100)\nCheck constraints:\n \"event_20130406_date_utc_check\" CHECK (date_utc = '2013-04-06'::date)\n\n\ntest01=# \\d notification_counts_by_status_20130406\n Table \"public.notification_counts_by_status_20130406\"\n Column | Type | Modifiers\n---------------------------+-----------------------------+-----------\n event_id | bigint | not null\n org_id | integer | not null\n status | text | not null\n tstamp_utc | timestamp without time zone | not null\n event_creation_tstamp_utc | timestamp without time zone | not null\n delivery_tstamp_utc | timestamp without time zone |\nIndexes:\n \"notification_counts_by_status_20130406_event_id_org_id_pk\"\nPRIMARY KEY, btree (event_id, org_id)\n \"ncbs_20120406_delivery_tstamp_utc_pidx\" btree\n(delivery_tstamp_utc) WHERE delivery_tstamp_utc IS NOT NULL\n \"notification_counts_by_status_20130406_ectu_idx\" btree\n(event_creation_tstamp_utc) WITH (fillfactor=66)\n \"notification_counts_by_status_20130406_org_id_idx\" btree (org_id)\nWITH (fillfactor=98)\n \"notification_counts_by_status_20130406_status_idx\" btree (status)\nWITH (fillfactor=66)\nCheck constraints:\n \"ncbs_20130406_ectu_ck\" CHECK (event_creation_tstamp_utc >=\n'2013-04-06 00:00:00'::timestamp without time zone AND\nevent_creation_tstamp_utc < '2013-04-07 00:00:00'::timestamp without\ntime zone)\nInherits: notification_counts_by_status\n\n\n-- Row counts, in case that is useful\ntest01=# select count(*) from event_20130404:\n6,354,940\ntest01=# select count(*) from event_20130404 where tstamp_utc BETWEEN\n'2013-04-04 10:00:00' AND '2013-04-04 18:00:00';\n2,157,002\n\ntest01=# select count(*) from event_20130405 where tstamp_utc BETWEEN\n'2013-04-05 10:00:00' AND '2013-04-05 18:00:00';\n2,160,009\ntest01=# select count(*) from event_20130405;\n6,479,278\n\n\ntest01=# select count(*) from notification_counts_by_status_20130404\nwhere event_creation_tstamp_utc BETWEEN '2013-04-04 10:00:00' AND\n'2013-04-04 18:00:00';\n2,157,000\n\ntest01=# select count(*) from notification_counts_by_status_20130405\nwhere event_creation_tstamp_utc BETWEEN '2013-04-05 10:00:00' AND\n'2013-04-05 18:00:00';\n2,160,009\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Apr 2013 09:51:02 -0700",
"msg_from": "brick pglists <[email protected]>",
"msg_from_op": true,
"msg_subject": "Changing ORDER BY column slows query dramatically"
},
{
"msg_contents": "On 04/12/2013 11:51 AM, brick pglists wrote:\n\n> -> Index Scan Backward using event_20130405_id_pk on\n> event_20130405 e (cost=0.00..612732.34 rows=1889715 width=34) (actual\n> time=2076.812..2111.274 rows=10100 loops=1)\n > Filter: ((tstamp_utc >= '2013-04-05\n > 10:00:00'::timestamp without time zone) AND (tstamp_utc <= '2013-04-05\n > 18:00:00'::timestamp without time zone) AND (date_utc =\n > '2013-04-05'::date) AND (org_id = 216471))\n\nThis right here is your culprit. The planner thinks it'll be faster to \ngrab 100 rows by scanning your primary key backwards and filtering out \nthe matching utc timestamps and other criteria.\n\nSince it doesn't show up in your GUC list, you should probably increase \nyour default_statistics_target to 400 or more, analyze, and try again. \nThe heuristics for the dates aren't complete enough, so it thinks there \nare few matches. If that doesn't work and you want a quick, but ugly fix \nfor this, you can create the following index:\n\nCREATE INDEX event_20130406_id_desc_tstamp_utc_idx\n ON event_20130406 (id DESC, tstamp_utc);\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Apr 2013 14:59:03 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Changing ORDER BY column slows query dramatically"
},
{
"msg_contents": "Hi Shaun,\n\nOn Fri, Apr 12, 2013 at 12:59 PM, Shaun Thomas <[email protected]> wrote:\n> On 04/12/2013 11:51 AM, brick pglists wrote:\n>\n> Since it doesn't show up in your GUC list, you should probably increase your\n> default_statistics_target to 400 or more, analyze, and try again. The\n> heuristics for the dates aren't complete enough, so it thinks there are few\n> matches. If that doesn't work and you want a quick, but ugly fix for this,\n> you can create the following index:\n>\n> CREATE INDEX event_20130406_id_desc_tstamp_utc_idx\n> ON event_20130406 (id DESC, tstamp_utc);\n\n\nThanks for your suggestions. Bumping up the default_statistics_target\nseveral times all the way to 4000 (ANALYZEd each time) did not help,\nhowever, adding the index you suggested helped with that query. It is\nstill over a magnitude slower than the version that sorts by\ntstamp_utc, but it's a start. I created a similar index (CREATE INDEX\nevent_20130406_id_desc_tstamp_utc_desc_idx ON event_20130406 (id DESC,\ntstamp_utc DESC)) where both columns were sorted DESCm and given the\nchoice between those two, it chose the latter.\nSetting enable_mergejoin to false results in a plan much closer to the\noriginal fast one, and further changing cpu_tuple_cost up to 1 results\nin a query about 3x slower than the original fast one.\n\nThe ORDER BY e.id query, with the new index, enable_mergejoin\ndisabled, and cpu_tuple_cost bumped up to 1:\n\n Limit (cost=125386.16..126640.02 rows=100 width=42) (actual\ntime=220.807..221.864 rows=100 loops=1)\n Buffers: shared hit=49171 read=6770\n I/O Timings: read=44.980\n -> Nested Loop (cost=0.00..7734858.92 rows=616883 width=42)\n(actual time=110.718..213.923 rows=10100 loops=1)\n Buffers: shared hit=49171 read=6770\n I/O Timings: read=44.980\n -> Index Scan using\nevent_20130406_id_desc_tstamp_utc_desc_idx on event_20130406 e\n(cost=0.00..2503426.81 rows=1851068 width=34) (actual\ntime=110.690..139.001 rows=10100 loops=1)\n Index Cond: ((tstamp_utc >= '2013-04-06\n10:00:00'::timestamp without time zone) AND (tstamp_utc <= '2013-04-06\n18:00:00'::timestamp without time zone))\n Filter: ((date_utc = '2013-04-06'::date) AND (org_id = 216471))\n Rows Removed by Filter: 1554\n Buffers: shared hit=8647 read=6770\n I/O Timings: read=44.980\n -> Index Scan using\nnotification_counts_by_status_20130406_event_id_org_id_pk on\nnotification_counts_by_status_20130406 ncbs (cost=0.00..1.83 rows=1\nwidth=16) (actual time=0.003..0.004 rows=1 loops=10100)\n Index Cond: (event_id = e.id)\n Filter: ((event_creation_tstamp_utc >= '2013-04-06\n10:00:00'::timestamp without time zone) AND (event_creation_tstamp_utc\n<= '2013-04-06 18:00:00'::timestamp without time zone) AND (status =\n'DELIVERED'::text))\n Buffers: shared hit=40524\n Total runtime: 222.127 ms\n(17 rows)\n\nStill not at the ~90ms from the \"ORDER BY e.tstamp_utc DESC\" version,\nbut not too bad. Now I need to figure out how I can get the best plan\nchoice without monkeying around with enable_mergejoin and changing\ncpu_tuple_cost too much.\n\nIf any more suggestions are forthcoming, I am all ears!\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Apr 2013 13:38:00 -0700",
"msg_from": "brick pglists <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Changing ORDER BY column slows query dramatically"
}
] |
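For reference, a sketch of the two remedies discussed above, applied per column and per partition rather than through the global default_statistics_target (the statistics targets are only illustrative starting points; the DESC, DESC index is the variant the poster reports the planner actually preferred):

    ALTER TABLE event_20130406 ALTER COLUMN tstamp_utc SET STATISTICS 1000;
    ALTER TABLE event_20130406 ALTER COLUMN org_id     SET STATISTICS 1000;
    ANALYZE event_20130406;

    CREATE INDEX event_20130406_id_desc_tstamp_utc_desc_idx
        ON event_20130406 (id DESC, tstamp_utc DESC);

Per-column SET STATISTICS keeps ANALYZE cheap on the remaining columns while giving the planner a finer histogram on exactly the columns the ORDER BY and range filter depend on.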
[
{
"msg_contents": "Hi,\n\nWhat is the recommended swap space for postgres or 9.0.11 or 9.2.3 on\nlinux(3.0.58-0.6)\nRAM on the hox is 32GB.\n\nThank you\nNik\n\nHi,What is the recommended swap space for postgres or 9.0.11 or 9.2.3 on linux(3.0.58-0.6)RAM on the hox is 32GB.Thank you\nNik",
"msg_date": "Fri, 12 Apr 2013 14:05:40 -0700",
"msg_from": "Nik Tek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Recommended Swap space"
},
{
"msg_contents": "My experience is that you're best off either with a swap space that matches\nor exceeds physical memory, or none at all. The linux kernel swap daemon\n(kswapd) gets confused and behaves badly with small swap spaces, especially\nthe more memory you have. Good news is that hard drives are cheap and 32G\nisn't a lot of memory so I'd set it up for 32G + a tiny bit. Once a\nmachine gets to larger memory sizes (128G and more) I just turn off swap.\n\nThe really messed up bit is that the problems with the kswapd won't show up\nfor weeks, months, or sometimes even longer. The symptoms of a kswapd\nproblem is that swap is mostly full, but there's LOTS of free memory /\nkernel cache, and the kswapd is busily swapping in / out. Page faults\nskyrocket and the problem can last for a few minutes or a few hours, then\nclear up and not occur again for a long time.\n\ntl;dr: Either physical memory + a little or none.\n\n\nOn Fri, Apr 12, 2013 at 3:05 PM, Nik Tek <[email protected]> wrote:\n\n> Hi,\n>\n> What is the recommended swap space for postgres or 9.0.11 or 9.2.3 on\n> linux(3.0.58-0.6)\n> RAM on the hox is 32GB.\n>\n> Thank you\n> Nik\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n\nMy experience is that you're best off either with a swap space that matches or exceeds physical memory, or none at all. The linux kernel swap daemon (kswapd) gets confused and behaves badly with small swap spaces, especially the more memory you have. Good news is that hard drives are cheap and 32G isn't a lot of memory so I'd set it up for 32G + a tiny bit. Once a machine gets to larger memory sizes (128G and more) I just turn off swap.\nThe really messed up bit is that the problems with the kswapd won't show up for weeks, months, or sometimes even longer. The symptoms of a kswapd problem is that swap is mostly full, but there's LOTS of free memory / kernel cache, and the kswapd is busily swapping in / out. Page faults skyrocket and the problem can last for a few minutes or a few hours, then clear up and not occur again for a long time.\ntl;dr: Either physical memory + a little or none.On Fri, Apr 12, 2013 at 3:05 PM, Nik Tek <[email protected]> wrote:\nHi,What is the recommended swap space for postgres or 9.0.11 or 9.2.3 on linux(3.0.58-0.6)\nRAM on the hox is 32GB.Thank you\nNik\n-- To understand recursion, one must first understand recursion.",
"msg_date": "Fri, 12 Apr 2013 15:15:49 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommended Swap space"
},
{
"msg_contents": "On Fri, Apr 12, 2013 at 6:15 PM, Scott Marlowe <[email protected]> wrote:\n> The really messed up bit is that the problems with the kswapd won't show up\n> for weeks, months, or sometimes even longer. The symptoms of a kswapd\n> problem is that swap is mostly full, but there's LOTS of free memory /\n> kernel cache, and the kswapd is busily swapping in / out. Page faults\n> skyrocket and the problem can last for a few minutes or a few hours, then\n> clear up and not occur again for a long time.\n\nIsn't that a NUMA issue?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Apr 2013 18:21:44 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Recommended Swap space"
},
{
"msg_contents": "No, I've had it happen with NUMA turned off. The NUMA issues are with\nzone_reclaim_mode. If you have it set to 1 (default on a lot of big\nmachines) it can cause serious problems with performance as well.\n\n\nOn Fri, Apr 12, 2013 at 3:21 PM, Claudio Freire <[email protected]>wrote:\n\n> On Fri, Apr 12, 2013 at 6:15 PM, Scott Marlowe <[email protected]>\n> wrote:\n> > The really messed up bit is that the problems with the kswapd won't show\n> up\n> > for weeks, months, or sometimes even longer. The symptoms of a kswapd\n> > problem is that swap is mostly full, but there's LOTS of free memory /\n> > kernel cache, and the kswapd is busily swapping in / out. Page faults\n> > skyrocket and the problem can last for a few minutes or a few hours, then\n> > clear up and not occur again for a long time.\n>\n> Isn't that a NUMA issue?\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n\nNo, I've had it happen with NUMA turned off. The NUMA issues are with zone_reclaim_mode. If you have it set to 1 (default on a lot of big machines) it can cause serious problems with performance as well.\nOn Fri, Apr 12, 2013 at 3:21 PM, Claudio Freire <[email protected]> wrote:\nOn Fri, Apr 12, 2013 at 6:15 PM, Scott Marlowe <[email protected]> wrote:\n\n> The really messed up bit is that the problems with the kswapd won't show up\n> for weeks, months, or sometimes even longer. The symptoms of a kswapd\n> problem is that swap is mostly full, but there's LOTS of free memory /\n> kernel cache, and the kswapd is busily swapping in / out. Page faults\n> skyrocket and the problem can last for a few minutes or a few hours, then\n> clear up and not occur again for a long time.\n\nIsn't that a NUMA issue?\n-- To understand recursion, one must first understand recursion.",
"msg_date": "Fri, 12 Apr 2013 17:06:15 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Recommended Swap space"
}
] |
[
{
"msg_contents": "Hi guys.\nI read in some forums that this parameter is better to set it to something\nlike 0.7.\nMine is default (0.5) and sometime it logs the message:\nWARNING: pgstat wait timeout\n\nCould there be any relation between this parameter and the warning message?\nIs it safe to set it to 0.7?\n\nHi guys.I read in some forums that this parameter is better to set it to something like 0.7.Mine is default (0.5) and sometime it logs the message:WARNING: pgstat wait timeout\nCould there be any relation between this parameter and the warning message?\nIs it safe to set it to 0.7?",
"msg_date": "Fri, 12 Apr 2013 18:40:43 -0300",
"msg_from": "Rodrigo Barboza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Default value checkpoint_completion_target"
}
] |
[
{
"msg_contents": "I was investigating some performance issues and stumbled upon this behavior:\n\ncreate table main_table (i serial primary key, data varchar, ord int);\ncreate view main_view_order as select m.i, m.data, m.ord from main_table m order by m.i desc;\n\ninsert into main_table select i, i::text, i/10 from generate_series(1,1000000) i;\n\ncreate index ix_ord on main_table(ord);\nanalyze main_table;\n\nexplain analyze select * from main_view_order m where m.ord >= 5000 and m.ord <= 5500 limit 10;\n\nLimit (cost=0.00..69.01 rows=10 width=14) (actual time=330.943..330.951 rows=10 loops=1)\n -> Index Scan Backward using main_table_pkey on main_table m (cost=0.00..36389.36 rows=5281 width=14) (actual time=330.937..330.940 rows=10 loops=1)\n Filter: ((ord >= 5000) AND (ord <= 5500))\nTotal runtime: 330.975 ms\n\nI havent found it on TODO or in archives so I'm wondering if this is a known behavior.\n\nRegards,\nRikard\n\n-- \nRikard Pavelic\nhttp://www.ngs.hr/\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 13 Apr 2013 10:25:49 +0200",
"msg_from": "Rikard Pavelic <[email protected]>",
"msg_from_op": true,
"msg_subject": "limit is sometimes not pushed in view with order"
},
{
"msg_contents": "Rikard Pavelic <[email protected]> writes:\n> I was investigating some performance issues and stumbled upon this behavior:\n> create table main_table (i serial primary key, data varchar, ord int);\n> create view main_view_order as select m.i, m.data, m.ord from main_table m order by m.i desc;\n\n> insert into main_table select i, i::text, i/10 from generate_series(1,1000000) i;\n\n> create index ix_ord on main_table(ord);\n> analyze main_table;\n\n> explain analyze select * from main_view_order m where m.ord >= 5000 and m.ord <= 5500 limit 10;\n\n> Limit (cost=0.00..69.01 rows=10 width=14) (actual time=330.943..330.951 rows=10 loops=1)\n> -> Index Scan Backward using main_table_pkey on main_table m (cost=0.00..36389.36 rows=5281 width=14) (actual time=330.937..330.940 rows=10 loops=1)\n> Filter: ((ord >= 5000) AND (ord <= 5500))\n> Total runtime: 330.975 ms\n\n> I havent found it on TODO or in archives so I'm wondering if this is a known behavior.\n\nThere is nothing particularly wrong with that plan, or at least I'd not\nrecommend holding your breath waiting for it to get better.\n\nGiven this scenario, there are two possible (index-based) plans. The one\nabove scans the pkey index in decreasing order, reports out only the rows\nsatisfying the \"ord\" condition, and stops as soon as it has 10 rows.\nThe only other alternative is to scan the ord index to collect the 5000\nor so rows satisfying the \"ord\" condition, sort them all by \"i\", and\nthen throw away 4990 of them and return just the first 10.\n\nThe planner realizes that about 1/200th of the table satisfies the \"ord\"\ncondition, so it estimates that the first plan will require scanning\nabout 2000 entries in the pkey index to get 10 results. So that looks\nsignificantly cheaper than the other plan, which would require 5000\nindex fetches, not to mention a sort step.\n\nNow, in this artificial test case, the cost estimate is wrong because\n\"i\" and \"ord\" are perfectly correlated and all the desired rows are\nquite far down the descending-i index scan; so the chosen plan actually\nhas to scan a lot more than 2000 index entries. In a more realistic\ncase that plan would probably work noticeably better. However, the\nplanner has no statistics that would tell it about the degree of order\ncorrelation of the two columns, so it's not able to find that out.\n\nHaving said all that, neither of these plan choices are exactly ideal;\nthe planner is basically reduced to having to guess which one will\nsuck less. You might try experimenting with two-column indexes on\n(i, ord) or (ord, i) to give the planner some other cards to play.\nI'm not sure how much that would help in this exact query type, but for\nexample the common case of \"where x = constant order by y\" is perfectly\nmatched to a btree index on (x, y).\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 13 Apr 2013 11:21:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limit is sometimes not pushed in view with order"
},
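A sketch of the experiment Tom suggests, against the tables from the first message in this thread (the index name here is made up); whether the planner actually picks it for the range-plus-ORDER BY-plus-LIMIT query is what EXPLAIN ANALYZE has to confirm:

    CREATE INDEX ix_main_table_ord_i ON main_table (ord, i);
    ANALYZE main_table;

    EXPLAIN ANALYZE
    SELECT * FROM main_view_order m
    WHERE m.ord >= 5000 AND m.ord <= 5500
    LIMIT 10;

For the "where x = constant order by y" shape Tom mentions, a btree index on (x, y) returns rows already in the ORDER BY order, so the LIMIT can stop early without a sort step.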
{
"msg_contents": "On 13/04/13 18:25, Rikard Pavelic wrote:\n> I was investigating some performance issues and stumbled upon this behavior:\n> \n> create table main_table (i serial primary key, data varchar, ord int);\n> create view main_view_order as select m.i, m.data, m.ord from main_table m order by m.i desc;\n> \n> insert into main_table select i, i::text, i/10 from generate_series(1,1000000) i;\n> \n> create index ix_ord on main_table(ord);\n> analyze main_table;\n> \n> explain analyze select * from main_view_order m where m.ord >= 5000 and m.ord <= 5500 limit 10;\n> \n> Limit (cost=0.00..69.01 rows=10 width=14) (actual time=330.943..330.951 rows=10 loops=1)\n> -> Index Scan Backward using main_table_pkey on main_table m (cost=0.00..36389.36 rows=5281 width=14) (actual time=330.937..330.940 rows=10 loops=1)\n> Filter: ((ord >= 5000) AND (ord <= 5500))\n> Total runtime: 330.975 ms\n> \n> I havent found it on TODO or in archives so I'm wondering if this is a known behavior.\n> \n> Regards,\n> Rikard\n> \nHi,\nDisregard the VIEW for the moment. (its not the issue here).\n\nI wasn't able to get much better than a LIMIT of around 50 after a\nSET STATISTICS 1000 on the PK column (i).\n\njulian=# explain analyse select * from main_table m where m.ord >= 5000\nand m.ord <= 5500 order by m.i desc limit 49;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=352.73..352.85 rows=49 width=14) (actual time=3.215..3.227\nrows=49 loops=1)\n -> Sort (cost=352.73..365.23 rows=5000 width=14) (actual\ntime=3.213..3.217 rows=49 loops=1)\n Sort Key: i\n Sort Method: top-N heapsort Memory: 27kB\n -> Index Scan using ix_ord on main_table m (cost=0.00..187.36\nrows=5000 width=14) (actual time=0.025..1.479 rows=5010 loops=1)\n Index Cond: ((ord >= 5000) AND (ord <= 5500))\n Total runtime: 3.252 ms\n\n\nHowever, at LIMIT 48 it goes bad:\n\njulian=# explain analyse select * from main_table m where m.ord >= 5000\nand m.ord <= 5500 order by m.i desc limit 48;\n\nQUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..349.34 rows=48 width=14) (actual\ntime=280.158..280.179 rows=48 loops=1)\n -> Index Scan Backward using main_table_pkey on main_table m\n(cost=0.00..36389.36 rows=5000 width=14) (actual time=280.156..280.172\nrows=48 loops=1)\n Filter: ((ord >= 5000) AND (ord <= 5500))\n Rows Removed by Filter: 944991\n Total runtime: 280.206 ms\n\n\n49 rows is pretty good IMO.\nBut others might want to give some tips, because I don't use LIMIT much.\nYou might want to consider using CURSORs - Which in this example\nwould cache the 49 rows and pass the rows you limit (FETCH) more\nefficiently.\n\nRegards,\nJules.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 14 Apr 2013 01:36:48 +1000",
"msg_from": "Julian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limit is sometimes not pushed in view with order"
},
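A minimal sketch of the cursor approach Julian mentions; a cursor must run inside a transaction, and each FETCH continues where the previous one stopped instead of re-running the query with a different LIMIT/OFFSET:

    BEGIN;
    DECLARE page_cur NO SCROLL CURSOR FOR
        SELECT m.i, m.data, m.ord
        FROM main_table m
        WHERE m.ord >= 5000 AND m.ord <= 5500
        ORDER BY m.i DESC;
    FETCH 10 FROM page_cur;   -- first page
    FETCH 10 FROM page_cur;   -- next page, no fresh scan or sort from scratch
    CLOSE page_cur;
    COMMIT;

It is also worth noting that the planner optimizes cursor queries for fast startup (governed by cursor_tuple_fraction), which tends to suit this kind of paging workload.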
{
"msg_contents": "On Sat, 13 Apr 2013 11:21:19 -0400\nTom Lane <[email protected]> wrote:\n\n> The planner realizes that about 1/200th of the table satisfies the\n> \"ord\" condition, so it estimates that the first plan will require\n> scanning about 2000 entries in the pkey index to get 10 results. So\n> that looks significantly cheaper than the other plan, which would\n> require 5000 index fetches, not to mention a sort step.\n\nIs it really realizing that?\nMaybe I oversimplified my example.\nI was trying to show that planner is not using limit information when\nit should. For example if there were joins, he would do joins first\nusing filter estimate and at the end apply limit.\n\n> \n> Now, in this artificial test case, the cost estimate is wrong because\n> \"i\" and \"ord\" are perfectly correlated and all the desired rows are\n> quite far down the descending-i index scan; so the chosen plan\n> actually has to scan a lot more than 2000 index entries. In a more\n> realistic case that plan would probably work noticeably better.\n> However, the planner has no statistics that would tell it about the\n> degree of order correlation of the two columns, so it's not able to\n> find that out.\n> \n\nThats actually pretty realistic scenario (if you change ord int to\ncreated timestamp), but yeah it's probably best to create composite\nindex for that scenario.\n\nBut, look at this standard ERP example:\n\ncreate table main_table (id serial primary key, data varchar, created timestamptz);\ncreate table detail_table (main_id int references main_table, index int, data varchar, primary key(main_id, index));\n\ncreate view main_view as \nselect m.id, m.data, m.created, d.details\nfrom main_table m \nleft join \n(\n\tselect main_id, array_agg(d order by index) details\n\tfrom detail_table d\n\tgroup by main_id\n) d on m.id = d.main_id\norder by m.created desc;\n\ninsert into main_table select i, i::text, now() + (i/10 || ' sec')::interval from generate_series(1,100001) i;\ninsert into detail_table select i/5 + 1, i%5, i::text from generate_series(1,500000) i;\n\ncreate index ix_created on main_table(created desc);\nanalyze main_table;\nanalyze detail_table;\n\nexplain analyze select * from main_view m where m.created >= now() + '1 min'::interval and m.created <= now() + '5 min'::interval limit 10;\n\n\nLimit (cost=0.01..22913.81 rows=10 width=49) (actual time=35.548..920.034 rows=10 loops=1)\n -> Nested Loop Left Join (cost=0.01..5879654.29 rows=2566 width=49) (actual time=35.546..920.028 rows=10 loops=1)\n Join Filter: (m.id = d.main_id)\n Rows Removed by Join Filter: 904978\n -> Index Scan using ix_created on main_table m (cost=0.01..98.79 rows=2521 width=17) (actual time=0.059..0.103 rows=10 loops=1)\n Index Cond: ((created >= (now() + '00:01:00'::interval)) AND (created <= (now() + '00:05:00'::interval)))\n -> Materialize (cost=0.00..25343.93 rows=101773 width=36) (actual time=0.012..84.037 rows=90499 loops=10)\n -> Subquery Scan on d (cost=0.00..24039.07 rows=101773 width=36) (actual time=0.036..630.576 rows=100001 loops=1)\n -> GroupAggregate (cost=0.00..23021.34 rows=101773 width=46) (actual time=0.035..619.240 rows=100001 loops=1)\n -> Index Scan using detail_table_pkey on detail_table d (cost=0.00..19249.17 rows=500000 width=46) (actual time=0.012..154.834 rows=500000 loops=1)\nTotal runtime: 922.272 ms\n\nWhile one could argue that optimizer doesn't know to optimize left\njoin with group by its primary key, you can replace that join with\nsome other joins (ie left join to another table pk) 
and the same\nbehavior will be displayed (joining all the tables and applying the limit\nat the end).\nThat's why I asked whether the fence for pushing the limit down is a known\nbehavior.\n\nSince this behavior is really important to me, I will spend a lot of\ntime looking at Postgres to try to improve this.\n\nRegards,\nRikard\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 13 Apr 2013 20:08:16 +0200",
"msg_from": "Rikard Pavelic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: limit is sometimes not pushed in view with order"
},
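A hedged aside (an editorial addition, not part of the archived message above): the composite index the message suggests might look like the sketch below. The table and index names are assumptions, since the original test case is only paraphrased in Tom Lane's quote; the idea is that an index leading with the filtered column lets the planner read only the first 10 qualifying entries in the desired order, instead of walking the primary-key index and filtering.

-- assuming the earlier test table was roughly: create table t (i int primary key, ord int);
-- and the query was of the form: select * from t where ord = ? order by i desc limit 10;
create index ix_t_ord_i on t (ord, i desc);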
{
"msg_contents": "On Sat, 13 Apr 2013 20:08:16 +0200\nRikard Pavelic <[email protected]> wrote:\n\n> While one could argue that optimizer doesn't know to optimize left\n> join with group by its primary key, you can replace that join with\n> some other joins (ie left join to another table pk) and the same\n> behavior will be displayed (joining all tables and applying limit at\n> the end).\n> That's why I asked if fence for pushing limit is a known behavior.\n\nWhile I can work around that problem by pushing left join in select\nsubquery this doesn't solve all problems since limit is not pushed\ndown on nontrivial queries.\n\nThis is probably the best example:\n\ncreate table big_table(i serial primary key, delay int);\ncreate function some_calculation(i int) returns int as $$ \nbegin\n perform pg_sleep(i);\n return i*i;\nend $$ language plpgsql stable cost 100000;\n\ncreate view big_view \nas select t.i, some_calculation(t.delay) as calc, s.delay as d2\nfrom big_table t\nleft join big_table s on t.i = s.i + s.i\norder by t.i asc;\n\ninsert into big_table select i, i%5 from generate_series(1, 100000) i;\nanalyze big_table;\n\nexplain analyze select * from big_view v where i >= 100 and i <= 105 limit 1;\n\nLimit (cost=3201.63..3201.64 rows=1 width=12) (actual\ntime=10017.471..10017.471 rows=1 loops=1) -> Sort (cost=3201.63..3201.64 rows=5 width=12) (actual time=10017.469..10017.469 rows=1 loops=1)\n Sort Key: t.i\n Sort Method: top-N heapsort Memory: 25kB\n -> Hash Right Join (cost=8.52..3201.57 rows=5 width=12) (actual time=0.078..10017.436 rows=6 loops=1)\n Hash Cond: ((s.i + s.i) = t.i)\n -> Seq Scan on big_table s (cost=0.00..1443.00 rows=100000 width=8) (actual time=0.005..6.294 rows=100000 loops=1)\n -> Hash (cost=8.46..8.46 rows=5 width=8) (actual time=0.012..0.012 rows=6 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using big_table_pkey on big_table t (cost=0.00..8.46 rows=5 width=8) (actual time=0.007..0.008 rows=6 loops=1)\n Index Cond: ((i >= 100) AND (i <= 105))\nTotal runtime: 10017.514 ms\n\nexplain analyze select * from big_view v where i >= 100 and i <= 10005 limit 1;\n\nLimit (cost=0.00..2391.22 rows=1 width=12) (actual time=0.088..0.088 rows=1 loops=1)\n -> Nested Loop Left Join (cost=0.00..23780547.26 rows=9945 width=12) (actual time=0.087..0.087 rows=1 loops=1)\n Join Filter: (t.i = (s.i + s.i))\n Rows Removed by Join Filter: 49\n -> Index Scan using big_table_pkey on big_table t (cost=0.00..359.26 rows=9945 width=8) (actual time=0.014..0.014 rows=1 loops=1)\n Index Cond: ((i >= 100) AND (i <= 10005))\n -> Materialize (cost=0.00..2334.00 rows=100000 width=8) (actual time=0.009..0.020 rows=50 loops=1)\n -> Seq Scan on big_table s (cost=0.00..1443.00 rows=100000 width=8) (actual time=0.005..0.010 rows=50 loops=1)\nTotal runtime: 0.122 ms\n\nexplain analyze select * from big_view v where i >= 100 and i <= 10005 limit 10;\n\ntakes too long...\n\nTo me this looks like it should be fixable if limit is applied before\nall targets are evaluated.\nIf I remove the left join from the view, Postgres works as expected,\nso I guess it already knows how to apply limit before selects, but this\nis probably missing for subqueries where targets are pulled out of it.\n\nMaybe this is a problem since you would probably need closures to pull\nof a general solution, but there are plenty of use cases without group\nby that would benefit from applying limit before evaluating targets\nthat are used only in topmost result. 
\n\nSo I was wondering: is this a known problem, and is there any\ninterest in tackling it?\n\nRegards,\nRikard\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 14 Apr 2013 12:34:14 +0200",
"msg_from": "Rikard Pavelic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: limit is sometimes not pushed in view with order"
}
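A hedged sketch (an editorial addition, not quoted from the thread) of the manual rewrite the message above hints at: apply the range filter and the limit in a derived table first, and only then join and evaluate the expensive target, so some_calculation() runs for at most the limited rows. Table, column, and function names are taken from the quoted example; the exact query shape is an assumption.

select t.i, some_calculation(t.delay) as calc, s.delay as d2
from (
    select i, delay
    from big_table
    where i >= 100 and i <= 10005
    order by i asc
    limit 10   -- limit applied before the join and before some_calculation()
) t
left join big_table s on t.i = s.i + s.i
order by t.i asc;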
]