[
{
"msg_contents": "Hey everyone,\n\nI was wondering if anyone has found a way to get pg_basebackup to be... \nfaster. Currently we do our backups something like this:\n\ntar -c -I pigz -f /db/backup_yyyy-mm-dd.tar.gz -C /db pgdata\n\nWhich basically calls pigz to do parallel compression because with RAIDs \nand ioDrives all over the place, it's the compression that's the \nbottleneck. Otherwise, only one of our 24 CPUs is actually doing anything.\n\nI can't seem to find anything like this for pg_basebackup. It just uses \nits internal compression method. I could see this being the case for \npg_dump, but pg_basebackup just produces regular tar.gz files. Is there \nany way to either fake a parallel compression here, or should this be a \nfeature request for pg_basebackup?\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Tue, 12 Jun 2012 09:54:28 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of pg_basebackup"
},
{
"msg_contents": "On Tue, Jun 12, 2012 at 4:54 PM, Shaun Thomas <[email protected]> wrote:\n> Hey everyone,\n>\n> I was wondering if anyone has found a way to get pg_basebackup to be...\n> faster. Currently we do our backups something like this:\n>\n> tar -c -I pigz -f /db/backup_yyyy-mm-dd.tar.gz -C /db pgdata\n>\n> Which basically calls pigz to do parallel compression because with RAIDs and\n> ioDrives all over the place, it's the compression that's the bottleneck.\n> Otherwise, only one of our 24 CPUs is actually doing anything.\n>\n> I can't seem to find anything like this for pg_basebackup. It just uses its\n> internal compression method. I could see this being the case for pg_dump,\n> but pg_basebackup just produces regular tar.gz files. Is there any way to\n> either fake a parallel compression here, or should this be a feature request\n> for pg_basebackup?\n\nIf you have a single tablespace you can have pg_basebackup write the\noutput to stdout and then pipe that through pigz.\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n",
"msg_date": "Tue, 12 Jun 2012 16:57:38 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of pg_basebackup"
},
{
"msg_contents": "On 06/12/2012 09:57 AM, Magnus Hagander wrote:\n\n> If you have a single tablespace you can have pg_basebackup write the\n> output to stdout and then pipe that through pigz.\n\nYeah, I saw that. Unfortunately we have tiered storage and hence two \ntablespaces. :(\n\nTo be fair, my current process cheats by following symlinks. I haven't \nyet modified it to explicitly handle tablespaces. I'll probably just \nsteal the directory method pg_basebackup uses until it can natively call \nan external compression program.\n\nThanks, Magnus!\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Tue, 12 Jun 2012 10:00:35 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of pg_basebackup"
}
]
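A minimal sketch of the workaround Magnus describes, for the single-tablespace case: have pg_basebackup write a tar stream to stdout and hand compression to pigz. The host, user, pigz thread count, and output path below are placeholders, and the WAL option to add depends on the server version:

# Stream the base backup as a tar to stdout and compress it in parallel with
# pigz; "-D -" works only when the cluster has no additional tablespaces.
# Add -x / -X as appropriate for your version if WAL should be included.
pg_basebackup -h dbhost -U backup_user -D - -F tar \
    | pigz -p 12 > /db/backup_$(date +%Y-%m-%d).tar.gz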
[
{
"msg_contents": "Hello,\n\nI have around 1000 schema in database, Each schema having similar data \nstructure with different data\nEach schema has few tables which never updates (Read only table) and \nother tables rewrites almost everyday so I prefer to TRUNCATE those \ntables and restores with new data\n\nNow facing issue on high CPU & IO on database primarily of Stats \nCollector & Vacuuming, size of statfile is almost 28MB and when I \nmanually vacuum analyze complete database it takes almost 90 minutes \nthough auto vacuum is configured\n\nRestoring dump on each schema may minor data variations\nExecuting SQL statements on schema are few , Affecting less than 50 \ntouple / day\n\nMy Questions :\n\nIncreasing Maintainace_Work_Mem improves auto / manual vacuum \nperformance ? If it improves will it require more IO / CPU resource ?\nIf I stops Stats Collector process & auto vaccuming & Execute manual \nvaccum based on schema restoration with major change what performance \nparameter I need to consider ? (Restoring data has vary few changes)\nIs Vacuuming & Stats required here for Metadata for improving \nperformance ? (Table structures remain same)\n\nAny more on this which can help to reduce IO without affecting major \nperformance\n\nregards,\nSiddharth\n",
"msg_date": "Thu, 14 Jun 2012 20:45:58 +0530",
"msg_from": "Siddharth Shah <[email protected]>",
"msg_from_op": true,
"msg_subject": "High CPU Usage"
},
{
"msg_contents": "On Thu, Jun 14, 2012 at 11:15 AM, Siddharth Shah\n<[email protected]> wrote:\n> I have around 1000 schema in database, Each schema having similar data\n> structure with different data\n> Each schema has few tables which never updates (Read only table) and other\n> tables rewrites almost everyday so I prefer to TRUNCATE those tables and\n> restores with new data\n>\n> Now facing issue on high CPU & IO on database primarily of Stats Collector &\n> Vacuuming, size of statfile is almost 28MB\n\nHow many tables do you have across all the schemas?\n\n> and when I manually vacuum\n> analyze complete database it takes almost 90 minutes though auto vacuum is\n> configured\n\nThere's no real reason to run vacuum analyze manually if you have\nautovacuum configured.\n\n> Restoring dump on each schema may minor data variations\n> Executing SQL statements on schema are few , Affecting less than 50 touple /\n> day\n>\n> My Questions :\n>\n> Increasing Maintainace_Work_Mem improves auto / manual vacuum performance ?\n\nIt can, but mostly if there are a lot of updates or deletes. If the\ntables aren't changing much it isn't going to do anything.\n\n> If it improves will it require more IO / CPU resource ?\n> If I stops Stats Collector process & auto vaccuming & Execute manual vaccum\n> based on schema restoration with major change what performance parameter I\n> need to consider ? (Restoring data has vary few changes)\n> Is Vacuuming & Stats required here for Metadata for improving performance ?\n> (Table structures remain same)\n>\n> Any more on this which can help to reduce IO without affecting major\n> performance\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 23 Jul 2012 15:38:35 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High CPU Usage"
}
]
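If any manual maintenance is kept at all after a schema restore, it can be scoped to the tables that were actually truncated and reloaded rather than a database-wide VACUUM ANALYZE, which follows from Robert's point that untouched tables gain nothing from it. A sketch using vacuumdb; the database, schema, and table names are hypothetical:

# Analyze (and vacuum) only the reloaded tables of one schema after a restore.
for tbl in readings summary; do
    vacuumdb --analyze --table="schema_0042.${tbl}" --dbname=appdb
done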
[
{
"msg_contents": "I've been struggling with this issue for the last several days, and I feel\nlike I'm running into a few different issues that I don't understand. I'm\nusing postgres 9.0.8, and here's the OS I'm running this on:\ninux 2.6.18-308.4.1.el5xen #1 SMP Tue Apr 17 17:49:15 EDT 2012 x86_64\nx86_64 x86_64 GNU/Linux\n\nfrom show all:\nshared_buffers | 4GB\nwork_mem | 192MB\n maintenance_work_mem | 1GB\neffective_cache_size | 24GB\n wal_sync_method | fdatasync\nrandom_page_cost | 4\n\nMy situtation: I have an empty parent table, that has 250 partitions. Each\npartition has 4 million records (250 megs). I'm querying 5k records\ndirectly from one partition (no joins), and it's taking ~2 seconds to get\nthe results. This feels very slow to me for an indexed table of only 4\nmillion records.\n\nQuick overview of my questions::\n1. expected performance? tips on what to look into to increase performance?\n2. should multicolumn indices help?\n3. does reindex table cache the table?\n\nBelow are the tables, queries, and execution plans with my questions with\nmore detail. (Since I have 250 partitions, I can query one partition after\nthe other to ensure that I'm not pulling results form the cache)\n\nParent table:\n# \\d data\n Table \"public.data\"\n Column | Type | Modifiers\n--------------+------------------+-----------\n data_id | integer | not null\n dataset_id | integer | not null\n stat | double precision | not null\n stat_id | integer | not null\nNumber of child tables: 254 (Use \\d+ to list them.)\n\n\nChild (partition) with ~4 million records:\n\n\\d data_part_201\ngenepool_1_11=# \\d data_part_201\n Table \"public.data_part_201\"\n Column | Type | Modifiers\n--------------+------------------+-----------\n data_id | integer | not null\n dataset_id | integer | not null\n stat | double precision | not null\n stat_id | integer | not null\nIndexes:\n \"data_unq_201\" UNIQUE, btree (data_id)\n \"data_part_201_dataset_id_idx\" btree (dataset_id)\n \"data_part_201_stat_id_idx\" btree (stat_id)\nCheck constraints:\n \"data_chk_201\" CHECK (dataset_id = 201)\nInherits: data\n\nexplain analyze select data_id, dataset_id, stat from data_part_201 where\ndataset_id = 201\nand stat_id = 6 and data_id>=50544630 and data_id<=50549979;\n\n Bitmap Heap Scan on data_part_201 (cost=115.79..14230.69 rows=4383\nwidth=16) (actual time=36.103..1718.141 rows=5350 loops=1)\n Recheck Cond: ((data_id >= 50544630) AND (data_id <= 50549979))\n Filter: ((dataset_id = 201) AND (stat_id = 6))\n -> Bitmap Index Scan on data_unq_201 (cost=0.00..114.70 rows=5403\nwidth=0) (actual time=26.756..26.756 rows=5350 loops=1)\n Index Cond: ((data_id >= 50544630) AND (data_id <= 50549979))\n Total runtime: 1728.447 ms\n(6 rows)\n\nTime: 1743.535 ms\n\nQUESTION 1: you can see that the query is very simple. is this the optimal\nexecution plan? 
any tips on what to look into to increase performance?\n\nI then tried adding the following multi-column index:\n\"data_part_202_dataset_regionset_data_idx\" btree (dataset_id, data_id,\nstat_id)\n\nThe query now takes 27 seconds!:\nexplain analyze select data_id, dataset_id, stat from data_part_202 where\ndataset_id = 202\nand stat_id = 6 and data_id>=50544630 and data_id<=50549979;\n\n Index Scan using data_part_202_dataset_regionset_data_idx on data_part_202\n (cost=0.00..7987.83 rows=4750 width=16) (actual time=39.152..27339.401\nrows=5350 loops=1)\n Index Cond: ((dataset_id = 202) AND (data_id >= 50544630) AND (data_id\n<= 50549979) AND (stat_id = 6))\n Total runtime: 27349.091 ms\n(3 rows)\n\nQUESTION 2: why is a multicolumn index causing the query to run so much\nslower? I had expected it to increase the performance\n\n\nQUESTION 3:\nIf I do the following: reindex table data_part_204 the query now takes\n50-70 milliseconds. Is this because the table is getting cached? How do I\nknow if a particular query is coming from the cache? The reason why I think\n\"reindex table\" is caching the results, is that select count(*) from the\npartition also causes the query to be fast.\n\n(and yes, vacuum analyze on the partition makes no difference)\n",
"msg_date": "Fri, 15 Jun 2012 09:17:39 -0700",
"msg_from": "Anish Kejariwal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Expected performance of querying 5k records from 4 million records?"
},
{
"msg_contents": "Anish,\n\n> I've been struggling with this issue for the last several days, and I feel\n> like I'm running into a few different issues that I don't understand. I'm\n> using postgres 9.0.8, and here's the OS I'm running this on:\n> inux 2.6.18-308.4.1.el5xen #1 SMP Tue Apr 17 17:49:15 EDT 2012 x86_64\n> x86_64 x86_64 GNU/Linux\n\nRAM? What does your disk support look like? (disks especially are\nrelevant, see below).\n\n> explain analyze select data_id, dataset_id, stat from data_part_201 where\n> dataset_id = 201\n> and stat_id = 6 and data_id>=50544630 and data_id<=50549979;\n> \n> Bitmap Heap Scan on data_part_201 (cost=115.79..14230.69 rows=4383\n> width=16) (actual time=36.103..1718.141 rows=5350 loops=1)\n> Recheck Cond: ((data_id >= 50544630) AND (data_id <= 50549979))\n> Filter: ((dataset_id = 201) AND (stat_id = 6))\n> -> Bitmap Index Scan on data_unq_201 (cost=0.00..114.70 rows=5403\n> width=0) (actual time=26.756..26.756 rows=5350 loops=1)\n> Index Cond: ((data_id >= 50544630) AND (data_id <= 50549979))\n> Total runtime: 1728.447 ms\n> (6 rows)\n\nI've seen extremely slow Bitmap Heap Scans like this before. There's a\nfew things which can cause them in my experience:\n\n1) Table is on disk, and random access to disk is very slow for some reason.\n\n2) Recheck condition is computationally expensive (unlikely here)\n\n3) Index is very bloated and needs reindexing (again, unlikely because\nthe initial Bitmap Index Scan is quite fast).\n\nTo test the above: run the exact same query several times in a row.\nDoes it get dramatically faster on the 2nd and successive runs?\n\n> Index Scan using data_part_202_dataset_regionset_data_idx on data_part_202\n> (cost=0.00..7987.83 rows=4750 width=16) (actual time=39.152..27339.401\n> rows=5350 loops=1)\n> Index Cond: ((dataset_id = 202) AND (data_id >= 50544630) AND (data_id\n> <= 50549979) AND (stat_id = 6))\n> Total runtime: 27349.091 ms\n> (3 rows)\n\nI'll point out that you're now querying a different partition than you\ndid above.\n\nAgain, this would point to random access to the underlying partition\nbeing very slow.\n\n> QUESTION 3:\n> If I do the following: reindex table data_part_204 the query now takes\n> 50-70 milliseconds. Is this because the table is getting cached? How do I\n> know if a particular query is coming from the cache? The reason why I think\n> \"reindex table\" is caching the results, is that select count(*) from the\n> partition also causes the query to be fast.\n\nYes, it's most likely because the table is being cached. To test this,\nrun one of the slow query versions above repeatedly.\n\nThings to investigate:\n\n1) Is there some reason why random access on your disks would be\nunusually slow? iSCSI, cheap NAS/SAN, RAID 5+0, running on OSX or\nWindows, etc.?\n\n2) Is there a possibility that the partitions or indexes involved might\nbe unusually bloated, such as a large number of historical updates to\nindexed columns? If so, does a CLUSTER on one partition make the issue\ngo away?\n\n3) Test your database using PostgreSQL 9.2 Beta2. Do the new index-only\nscans solve this issue?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Fri, 15 Jun 2012 10:44:45 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Expected performance of querying 5k records from 4\n\tmillion records?"
},
{
"msg_contents": "On Fri, Jun 15, 2012 at 9:17 AM, Anish Kejariwal <[email protected]> wrote:\n>\n> Below are the tables, queries, and execution plans with my questions with\n> more detail. (Since I have 250 partitions, I can query one partition after\n> the other to ensure that I'm not pulling results form the cache)\n\nDoesn't that explain why it is slow? If you have 15000 rpm drives and\neach row is in a different block and uncached, it would take 20\nseconds to read them all in. You are getting 10 times better than\nthat, either due to caching or because your rows are clustered, or\nbecause effective_io_concurrency is doing its thing.\n\n>\n> explain analyze select data_id, dataset_id, stat from data_part_201 where\n> dataset_id = 201\n> and stat_id = 6 and data_id>=50544630 and data_id<=50549979;\n\nWhat does \"explain (analyze, buffers)\" show?\n\n\n> QUESTION 1: you can see that the query is very simple. is this the optimal\n> execution plan? any tips on what to look into to increase performance?\n>\n> I then tried adding the following multi-column index:\n> \"data_part_202_dataset_regionset_data_idx\" btree (dataset_id, data_id,\n> stat_id)\n\nSince you query stat_id for equality and data_id for range, you should\nprobably reverse the order of those columns in the index.\n\n\n>\n> QUESTION 3:\n> If I do the following: reindex table data_part_204 the query now takes\n> 50-70 milliseconds. Is this because the table is getting cached? How do I\n> know if a particular query is coming from the cache?\n\nUsing explain (analyze, buffers) will show you if it is coming from\nthe shared_buffers cache.\n\nIt is harder to see if it is coming from the file system cache. If\nthe server is mostly idle other than your stuff, you can run vmstat\nand see how much physical IO is caused by your activity.\n\nCheers,\n\nJeff\n",
"msg_date": "Fri, 15 Jun 2012 11:20:00 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Expected performance of querying 5k records from 4\n\tmillion records?"
},
{
"msg_contents": "Thanks for the help, Jeff and Josh. It looks reclustering the multi-column\nindex might solve things. For my particular query, because I'm getting a\nrange of records back, it makes sense that reclustering will benefit me if\nI have a slow disk even if I had expected that the indices would be\nsufficient . I now need to make sure that the speed up I'm seeing is not\nbecause things have been cached.\n\nThat being said, here's what I have:\n2CPUs, 12 physical cores, hyperthreaded (24 virtual cores), 2.67Ghz\n96G RAM, 80G available to dom0\nCentOS 5.8, Xen\n3Gbps SATA (7200 RPM, Hitachi ActiveStar Enterprise Class)\n\nSo, I have lots of RAM, but not necessarily the fastest disk.\n\ndefault_statistics_target = 50 # pgtune wizard 2011-03-16\nmaintenance_work_mem = 1GB # pgtune wizard 2011-03-16\nconstraint_exclusion = on # pgtune wizard 2011-03-16\ncheckpoint_completion_target = 0.9 # pgtune wizard 2011-03-16\neffective_cache_size = 24GB # pgtune wizard 2011-03-16\nwork_mem = 192MB # pgtune wizard 2011-03-16\nwal_buffers = 8MB # pgtune wizard 2011-03-16\ncheckpoint_segments = 128 # pgtune wizard 2011-03-16, amended by am,\n30may2011\nshared_buffers = 4GB # pgtune wizard 2011-03-16\nmax_connections = 100 # pgtune wizard 2011-03-16: 80, bumped up to 100\nmax_locks_per_transaction = 1000\n\nI didn't know about explain (analyze,buffers). Very cool. So, based on\nyour advice, I ran it and here's what I found:\n\n1st time I ran the query:\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on data_part_213 (cost=113.14..13725.77 rows=4189\nwidth=16) (actual time=69.807..2763.174 rows=5350 loops=1)\n Recheck Cond: ((data_id >= 50544630) AND (data_id <= 50549979))\n Filter: ((dataset_id = 213) AND (stat_id = 6))\n Buffers: shared read=4820\n -> Bitmap Index Scan on data_unq_213 (cost=0.00..112.09 rows=5142\nwidth=0) (actual time=51.918..51.918 rows=5350 loops=1)\n Index Cond: ((data_id >= 50544630) AND (data_id <= 50549979))\n Buffers: shared read=19\n Total runtime: 2773.099 ms\n(8 rows)\n\nthe second time I run the query it's very fast, since all the buffered read\ncounts have turned into hit counts showing I'm reading from cache (as I\nexpected):\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on data_part_213 (cost=113.14..13725.77 rows=4189\nwidth=16) (actual time=1.661..14.376 rows=5350 loops=1)\n Recheck Cond: ((data_id >= 50544630) AND (data_id <= 50549979))\n Filter: ((dataset_id = 213) AND (stat_id = 6))\n Buffers: shared hit=4819\n -> Bitmap Index Scan on data_unq_213 (cost=0.00..112.09 rows=5142\nwidth=0) (actual time=0.879..0.879 rows=5350 loops=1)\n Index Cond: ((data_id >= 50544630) AND (data_id <= 50549979))\n Buffers: shared hit=18\n Total runtime: 20.232 ms\n(8 rows)\n\n\n\nNext, I tried reclustering a partition with the multicolumn-index. 
the big\nthings is that the read count has dropped dramatically!\n Index Scan using data_part_214_dataset_stat_data_idx on data_part_214\n (cost=0.00..7223.05 rows=4265 width=16) (actual time=0.093..7.251\nrows=5350 loops=1)\n Index Cond: ((dataset_id = 214) AND (data_id >= 50544630) AND (data_id\n<= 50549979) AND (stat_id = 6))\n Buffers: shared hit=45 read=24\n Total runtime: 12.929 ms\n(4 rows)\n\n\nsecond time:\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using data_part_214_dataset_stat_data_idx on data_part_214\n (cost=0.00..7223.05 rows=4265 width=16) (actual time=0.378..7.696\nrows=5350 loops=1)\n Index Cond: ((dataset_id = 214) AND (data_id >= 50544630) AND (data_id\n<= 50549979) AND (stat_id = 6))\n Buffers: shared hit=68\n Total runtime: 13.511 ms\n(4 rows)\n\nSo, it looks like clustering the index appropriately fixes things! Also,\nI'll recreate the index switching the order to (dataset_id, stat_id,data_id)\n\nthanks!\n\nOn Fri, Jun 15, 2012 at 11:20 AM, Jeff Janes <[email protected]> wrote:\n\n> On Fri, Jun 15, 2012 at 9:17 AM, Anish Kejariwal <[email protected]>\n> wrote:\n> >\n> > Below are the tables, queries, and execution plans with my questions with\n> > more detail. (Since I have 250 partitions, I can query one partition\n> after\n> > the other to ensure that I'm not pulling results form the cache)\n>\n> Doesn't that explain why it is slow? If you have 15000 rpm drives and\n> each row is in a different block and uncached, it would take 20\n> seconds to read them all in. You are getting 10 times better than\n> that, either due to caching or because your rows are clustered, or\n> because effective_io_concurrency is doing its thing.\n>\n> >\n> > explain analyze select data_id, dataset_id, stat from data_part_201 where\n> > dataset_id = 201\n> > and stat_id = 6 and data_id>=50544630 and data_id<=50549979;\n>\n> What does \"explain (analyze, buffers)\" show?\n>\n>\n> > QUESTION 1: you can see that the query is very simple. is this the\n> optimal\n> > execution plan? any tips on what to look into to increase performance?\n> >\n> > I then tried adding the following multi-column index:\n> > \"data_part_202_dataset_regionset_data_idx\" btree (dataset_id, data_id,\n> > stat_id)\n>\n> Since you query stat_id for equality and data_id for range, you should\n> probably reverse the order of those columns in the index.\n>\n>\n> >\n> > QUESTION 3:\n> > If I do the following: reindex table data_part_204 the query now takes\n> > 50-70 milliseconds. Is this because the table is getting cached? How\n> do I\n> > know if a particular query is coming from the cache?\n>\n> Using explain (analyze, buffers) will show you if it is coming from\n> the shared_buffers cache.\n>\n> It is harder to see if it is coming from the file system cache. If\n> the server is mostly idle other than your stuff, you can run vmstat\n> and see how much physical IO is caused by your activity.\n>\n> Cheers,\n>\n> Jeff\n>\n",
"msg_date": "Mon, 18 Jun 2012 09:39:31 -0700",
"msg_from": "Anish Kejariwal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Expected performance of querying 5k records from 4\n\tmillion records?"
},
{
"msg_contents": "On Mon, Jun 18, 2012 at 9:39 AM, Anish Kejariwal <[email protected]> wrote:\n\n>\n> So, it looks like clustering the index appropriately fixes things! Also,\n> I'll recreate the index switching the order to (dataset_id, stat_id,data_id)\n>\n> Just keep in mind that clustering is a one-time operation. Inserts and\nupdates will change the order of records in the table, so you'll need to\nre-cluster periodically to keep performance high if there are a lot of\ninserts and updates into the tables. I didn't re-read the thread, but I\nseem recall a partitioned table, so assuming you are partitioning in a\nmanner which keeps the number of partitions that are actively being\ninserted/updated on to a minimum, you only need to cluster the active\npartitions, which isn't usually terribly painful. Also, if you are bulk\nloading data (and not creating random spaces in the table by deleting and\nupdating), you can potentially order the data on the way into the table to\navoid the need to cluster repeatedly.\n\n--sam\n",
"msg_date": "Mon, 18 Jun 2012 09:49:39 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Expected performance of querying 5k records from 4\n\tmillion records?"
},
{
"msg_contents": "On 6/18/12 9:39 AM, Anish Kejariwal wrote:\n> Thanks for the help, Jeff and Josh. It looks reclustering the multi-column\n> index might solve things. For my particular query, because I'm getting a\n> range of records back, it makes sense that reclustering will benefit me if\n> I have a slow disk even if I had expected that the indices would be\n> sufficient . I now need to make sure that the speed up I'm seeing is not\n> because things have been cached.\n\nWell, other than that your performance is as expected because of your\nmuch-larger-than-RAM database and your relatively slow disk.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n",
"msg_date": "Mon, 18 Jun 2012 12:08:37 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Expected performance of querying 5k records from 4\n\tmillion records?"
},
{
"msg_contents": "Given a baseline postgresql.conf config and a couple DL580 40 core/256GB memory I noticed a large over head for pgbouncer, has anyone seen this before?\r\n\r\n\r\n$ pgbench -h `hostname -i` -j 32 -p 4320 -U asgprod -s 500 -c 32 -S -T 60 pgbench_500\r\nScale option ignored, using pgbench_branches table count = 500\r\nstarting vacuum...end.\r\ntransaction type: SELECT only\r\nscaling factor: 500\r\nquery mode: simple\r\nnumber of clients: 32\r\nnumber of threads: 32\r\nduration: 60 s\r\nnumber of transactions actually processed: 1743073\r\ntps = 29049.886666 (including connections establishing)\r\ntps = 29050.308194 (excluding connections establishing)\r\n\r\n$ pgbench -h `hostname -i` -j 32 -p 4310 -U asgprod -s 500 -c 32 -S -T 60 pgbench_500\r\nScale option ignored, using pgbench_branches table count = 500\r\nstarting vacuum...end.\r\ntransaction type: SELECT only\r\nscaling factor: 500\r\nquery mode: simple\r\nnumber of clients: 32\r\nnumber of threads: 32\r\nduration: 60 s\r\nnumber of transactions actually processed: 8692204\r\ntps = 144857.505107 (including connections establishing)\r\ntps = 144880.181341 (excluding connections establishing)\r\n\r\nprocessor : 39\r\nvendor_id : GenuineIntel\r\ncpu family : 6\r\nmodel : 47\r\nmodel name : Intel(R) Xeon(R) CPU E7- 4860 @ 2.27GHz\r\n\r\n\r\n\r\nThis email is confidential and subject to important disclaimers and\r\nconditions including on offers for the purchase or sale of\r\nsecurities, accuracy and completeness of information, viruses,\r\nconfidentiality, legal privilege, and legal entity disclaimers,\r\navailable at http://www.jpmorgan.com/pages/disclosures/email. \n",
"msg_date": "Tue, 19 Jun 2012 16:00:51 +0000",
"msg_from": "\"Strange, John W\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "pgbouncer - massive overhead?"
},
{
"msg_contents": "On 06/19/2012 09:00 AM, Strange, John W wrote:\n> Given a baseline postgresql.conf config and a couple DL580 40 core/256GB memory I noticed a large over head for pgbouncer, has anyone seen this before?\n>\n>\n> $ pgbench -h `hostname -i` -j 32 -p 4320 -U asgprod -s 500 -c 32 -S -T 60 pgbench_500\n> Scale option ignored, using pgbench_branches table count = 500\n> starting vacuum...end.\n> transaction type: SELECT only\n> scaling factor: 500\n> query mode: simple\n> number of clients: 32\n> number of threads: 32\n> duration: 60 s\n> number of transactions actually processed: 1743073\n> tps = 29049.886666 (including connections establishing)\n> tps = 29050.308194 (excluding connections establishing)\n>\n> $ pgbench -h `hostname -i` -j 32 -p 4310 -U asgprod -s 500 -c 32 -S -T 60 pgbench_500\n> Scale option ignored, using pgbench_branches table count = 500\n> starting vacuum...end.\n> transaction type: SELECT only\n> scaling factor: 500\n> query mode: simple\n> number of clients: 32\n> number of threads: 32\n> duration: 60 s\n> number of transactions actually processed: 8692204\n> tps = 144857.505107 (including connections establishing)\n> tps = 144880.181341 (excluding connections establishing)\n>\n> processor : 39\n> vendor_id : GenuineIntel\n> cpu family : 6\n> model : 47\n> model name : Intel(R) Xeon(R) CPU E7- 4860 @ 2.27GHz\n>\n>\nI'm very dubious that the stats are meaningful as run. Were the above \nstats generated on consecutive runs on the same machine or was the test \ndatabase fully returned to baseline between runs and the machine \nrestarted to clear cache?\n\nI doubt anyone here would trust the results of a 60-second pgbench run - \nespecially a select-only test on a server that will likely end up with \nvirtually everything ultimately in cache. Make sure each run is started \nfrom the same state and run for 30-60 minutes.\n\nStill, you *are* adding a layer between the client and the server. \nRunning the simplest of read-only queries against a fully-cached \ndatabase on a fast many-core machine is likely to emphasize any latency \nintroduced by pgbouncer. But it's also not a use-case for which \npgbouncer is intended. If you were to add -C so each query required a \nnew client connection a different picture would emerge. Same thing if \nyou had 2000 client connections of which only a handful were running \nqueries at any moment.\n\nCheers,\nSteve\n\n",
"msg_date": "Wed, 20 Jun 2012 08:27:14 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbouncer - massive overhead?"
}
]
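A sketch that combines Jeff's column-ordering advice with the clustering step Anish ended up using, sent through psql to the database named in the thread; the index name follows the thread's naming pattern, but the block is illustrative rather than the exact commands that were run:

psql -d genepool_1_11 <<'SQL'
-- equality columns (dataset_id, stat_id) first, the range column (data_id) last
CREATE INDEX data_part_202_dataset_stat_data_idx
    ON data_part_202 (dataset_id, stat_id, data_id);
-- one-time physical reordering; repeat after large bulk loads if needed
CLUSTER data_part_202 USING data_part_202_dataset_stat_data_idx;
ANALYZE data_part_202;
-- "shared hit" vs "read" in the output shows cache hits vs actual disk reads
EXPLAIN (ANALYZE, BUFFERS)
SELECT data_id, dataset_id, stat
  FROM data_part_202
 WHERE dataset_id = 202 AND stat_id = 6
   AND data_id BETWEEN 50544630 AND 50549979;
SQL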
[
{
"msg_contents": "Hi all,\n\nI am using postgresql 9.0 and I am updating a large table and running a\nselect count(*). The update is run first and then the select. The update is\nblocking the select statement. To use the term MVCC (as seems to be done so\nmuch in this list), well it seems to be broken. MVCC should allow a select\non the same table as an update, in fact nothing at all should block a\nselect. Also for some reason, the update query seems to always get an\nExclusive Lock which doesn't make any sense to me. At most an update should\nrequire a row lock. This seems to also apply to two updates on the same\ntable in parallel.\n\nDo I seem to have this right and is there anything I can do?\n\nThanks,\n~Ben\n",
"msg_date": "Fri, 15 Jun 2012 14:22:03 -0400",
"msg_from": "Benedict Holland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Update blocking a select count(*)?"
},
{
"msg_contents": "On 15 June 2012 19:22, Benedict Holland <[email protected]> wrote:\n> Do I seem to have this right and is there anything I can do?\n\nThere are a couple of maintenance operations that could block a\nselect. Do you see any AccessExclusive locks within pg_locks? That's\nthe only type of lock that will block a select statement's AccessShare\nlock.\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n",
"msg_date": "Fri, 15 Jun 2012 19:32:37 +0100",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update blocking a select count(*)?"
},
{
"msg_contents": "Benedict Holland <[email protected]> wrote:\n \n> I am using postgresql 9.0 and I am updating a large table\n \nUpdating as in executing an UPDATE statement, or as in using ALTER\nTABLE to update the *structure* of the table?\n \n> and running a select count(*). The update is run first and then\n> the select. The update is blocking the select statement.\n \nWhat is your evidence that there is blocking? (A long run time for\nthe select does not constitute such evidence. Nor does a longer run\ntime for the same select when it follows or is concurrent to the\nupdate than it was before.)\n \n> To use the term MVCC (as seems to be done so much in this list),\n> well it seems to be broken.\n \nThat would be very surprising, and seems unlikely.\n \n> MVCC should allow a select on the same table as an update, in fact\n> nothing at all should block a select.\n \nWell, no DML should block a select. DDL can.\n \n> Also for some reason, the update query seems to always get an\n> Exclusive Lock which doesn't make any sense to me.\n \nThere is no PostgreSQL command which acquires an EXCLUSIVE lock on a\ntable. An UPDATE will acquire a ROW EXCLUSIVE lock, which is very\ndifferent. An ALTER TABLE or TRUNCATE TABLE can acquire an ACCESS\nEXCLUSIVE lock, which is the *only* level which can block a typical\nSELECT statement.\n \n> At most an update should require a row lock. This seems to also\n> apply to two updates on the same table in parallel.\n \nYou *really* need to read this chapter in the docs:\n \nhttp://www.postgresql.org/docs/current/interactive/mvcc.html\n \nThe part about the different lock levels and what the conflicts are\nmight be of particular interest:\n \nhttp://www.postgresql.org/docs/current/interactive/explicit-locking.html#LOCKING-TABLES\n \n> Do I seem to have this right\n \nNo.\n \n> and is there anything I can do?\n \nProbably, but you haven't given us enough information to be able to\nsuggest what.\n \n-Kevin\n",
"msg_date": "Fri, 15 Jun 2012 13:41:02 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update blocking a select count(*)?"
},
{
"msg_contents": "Peter Geoghegan <[email protected]> wrote:\n> Benedict Holland <[email protected]> wrote:\n>> Do I seem to have this right and is there anything I can do?\n> \n> There are a couple of maintenance operations that could block a\n> select. Do you see any AccessExclusive locks within pg_locks?\n> That's the only type of lock that will block a select statement's\n> AccessShare lock.\n \nTo check for that, see the queries on these Wiki pages:\n \nhttp://wiki.postgresql.org/wiki/Lock_Monitoring\nhttp://wiki.postgresql.org/wiki/Lock_dependency_information\n \n-Kevin\n",
"msg_date": "Fri, 15 Jun 2012 13:43:30 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update blocking a select count(*)?"
},
{
"msg_contents": "Yes I actually seem to have two of them for the single update. The update I\nam running will set the value of a single column in the table without a\nwhere clause. I actually have two AccessShareLock's, two ExclusiveLock's,\nand two RowExclusiveLock's. It sort of seems like overkill for what should\nbe a copy the column to make the updates, make updates, and publish updates\nset of operations. On my select statement I have an ExclusiveLock and an\nAccessShareLock. I read the documentation on locking but this seems very\ndifferent from what I should expect.\n\nI am running an update statement without a where clause (so a full table\nupdate). This is not an alter table statement (though I am running that too\nand it is being blocked). I am looking in the SeverStatus section of\npgadmin3. There are three queries which are in green (not blocked), two\nstatements which are in red (an alter as expected and a select count(*)\nwhich are blocked by an update process).\n\nI can not tell you how many documents I have read for locks, statements\nwhich generate locks etc. I accept that this will run slowly, what pgadmin3\nis displaying to me is the described behavior.\n\nThanks,\n~Ben\n\n\n\nOn Fri, Jun 15, 2012 at 2:43 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Peter Geoghegan <[email protected]> wrote:\n> > Benedict Holland <[email protected]> wrote:\n> >> Do I seem to have this right and is there anything I can do?\n> >\n> > There are a couple of maintenance operations that could block a\n> > select. Do you see any AccessExclusive locks within pg_locks?\n> > That's the only type of lock that will block a select statement's\n> > AccessShare lock.\n>\n> To check for that, see the queries on these Wiki pages:\n>\n> http://wiki.postgresql.org/wiki/Lock_Monitoring\n> http://wiki.postgresql.org/wiki/Lock_dependency_information\n>\n> -Kevin\n>\n",
"msg_date": "Fri, 15 Jun 2012 14:46:09 -0400",
"msg_from": "Benedict Holland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Update blocking a select count(*)?"
},
{
"msg_contents": "Sorry about the raw text but this is what I am seeing:\n\n1736 postgres 6/39 6/39 ExclusiveLock Yes\n2012-06-15 13:36:22.997-04 insert into inspections\nselect * from inspections_1\n1736 rmv 49896 postgres 6/39 AccessShareLock Yes\n2012-06-15 13:36:22.997-04 insert into inspections\nselect * from inspections_1\n1736 rmv 33081 postgres 6/39 RowExclusiveLock Yes\n2012-06-15 13:36:22.997-04 insert into inspections\nselect * from inspections_1\n1736 rmv 33084 postgres 6/39 RowExclusiveLock Yes\n2012-06-15 13:36:22.997-04 insert into inspections\nselect * from inspections_1\n2096 postgres 8/151 ExclusiveLock Yes 2012-06-15\n10:25:08.329-04 vacuum (analyze, verbose, full)\n2096 rmv 33528 postgres 8/151 AccessExclusiveLock\nYes 2012-06-15 10:25:08.329-04 vacuum (analyze, verbose, full)\n2096 rmv 50267 postgres 8/151 AccessExclusiveLock\nYes 2012-06-15 10:25:08.329-04 vacuum (analyze, verbose, full)\n2096 postgres 8/151 8/151 ExclusiveLock Yes\n2012-06-15 10:25:08.329-04 vacuum (analyze, verbose, full)\n2844 postgres 5/27 5/27 ExclusiveLock Yes\n2012-06-15 13:50:46.417-04 select count(*) from vins\n2844 rmv 33074 postgres 5/27 AccessShareLock No\n2012-06-15 13:50:46.417-04 select count(*) from vins\n2940 postgres 2/251 2/251 ExclusiveLock Yes\n2012-06-15 13:34:53.55-04\nupdate vins\nset insp_count=vc.count\nfrom vin_counts vc\nwhere id = vc.vin_id;\n\n2940 rmv 41681 postgres 2/251 AccessShareLock Yes\n2012-06-15 13:34:53.55-04\nupdate vins\nset insp_count=vc.count\nfrom vin_counts vc\nwhere id = vc.vin_id;\n\n2940 postgres 2/251 ExclusiveLock Yes 2012-06-15\n13:34:53.55-04\nupdate vins\nset insp_count=vc.count\nfrom vin_counts vc\nwhere id = vc.vin_id;\n\n2940 rmv 41684 postgres 2/251 AccessShareLock Yes\n2012-06-15 13:34:53.55-04\nupdate vins\nset insp_count=vc.count\nfrom vin_counts vc\nwhere id = vc.vin_id;\n\n2940 rmv 50265 postgres 2/251 RowExclusiveLock Yes\n2012-06-15 13:34:53.55-04\nupdate vins\nset insp_count=vc.count\nfrom vin_counts vc\nwhere id = vc.vin_id;\n\n2940 rmv 33074 postgres 2/251 RowExclusiveLock Yes\n2012-06-15 13:34:53.55-04\nupdate vins\nset insp_count=vc.count\nfrom vin_counts vc\nwhere id = vc.vin_id;\n\n2940 rmv 33079 postgres 2/251 RowExclusiveLock Yes\n2012-06-15 13:34:53.55-04\nupdate vins\nset insp_count=vc.count\nfrom vin_counts vc\nwhere id = vc.vin_id;\n\n\n\n\nOn Fri, Jun 15, 2012 at 2:46 PM, Benedict Holland <\[email protected]> wrote:\n\n> Yes I actually seem to have two of them for the single update. The update\n> I am running will set the value of a single column in the table without a\n> where clause. I actually have two AccessShareLock's, two ExclusiveLock's,\n> and two RowExclusiveLock's. It sort of seems like overkill for what should\n> be a copy the column to make the updates, make updates, and publish updates\n> set of operations. On my select statement I have an ExclusiveLock and an\n> AccessShareLock. I read the documentation on locking but this seems very\n> different from what I should expect.\n>\n> I am running an update statement without a where clause (so a full table\n> update). This is not an alter table statement (though I am running that too\n> and it is being blocked). I am looking in the SeverStatus section of\n> pgadmin3. There are three queries which are in green (not blocked), two\n> statements which are in red (an alter as expected and a select count(*)\n> which are blocked by an update process).\n>\n> I can not tell you how many documents I have read for locks, statements\n> which generate locks etc. 
I accept that this will run slowly, what pgadmin3\n> is displaying to me is the described behavior.\n>\n> Thanks,\n> ~Ben\n>\n>\n>\n>\n> On Fri, Jun 15, 2012 at 2:43 PM, Kevin Grittner <\n> [email protected]> wrote:\n>\n>> Peter Geoghegan <[email protected]> wrote:\n>> > Benedict Holland <[email protected]> wrote:\n>> >> Do I seem to have this right and is there anything I can do?\n>> >\n>> > There are a couple of maintenance operations that could block a\n>> > select. Do you see any AccessExclusive locks within pg_locks?\n>> > That's the only type of lock that will block a select statement's\n>> > AccessShare lock.\n>>\n>> To check for that, see the queries on these Wiki pages:\n>>\n>> http://wiki.postgresql.org/wiki/Lock_Monitoring\n>> http://wiki.postgresql.org/wiki/Lock_dependency_information\n>>\n>> -Kevin\n>>\n>\n>\n",
"msg_date": "Fri, 15 Jun 2012 14:46:52 -0400",
"msg_from": "Benedict Holland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Update blocking a select count(*)?"
},
{
"msg_contents": "Benedict Holland <[email protected]> wrote:\n> Sorry about the raw text but this is what I am seeing:\n> \n> [wrapped text without column headers]\n \nCould you try that as an attachment, to avoid wrapping? Also, the\ncolumn headers, and/or the query used to generate those results\nwould be helpful.\n \n-Kevin\n",
"msg_date": "Fri, 15 Jun 2012 13:54:04 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update blocking a select count(*)?"
},
{
"msg_contents": "Benedict Holland <[email protected]> wrote:\n \n> 10:25:08.329-04 vacuum (analyze, verbose, full)\n> 2096 rmv 33528 postgres 8/151 \n> AccessExclusiveLock\n> Yes 2012-06-15 10:25:08.329-04 vacuum (analyze, verbose,\n> full)\n> 2096 rmv 50267 postgres 8/151 \n> AccessExclusiveLock\n> Yes 2012-06-15 10:25:08.329-04 vacuum (analyze, verbose,\n> full)\n \nYou have three VACUUM FULL commands running? VACUUM FULL is very\naggressive maintenance, which is only needed for cases of extreme\nbloat. It does lock the table against any concurrent access, since\nit is completely rewriting it.\n \nNow, if you are running UPDATE statements which affect all rows in a\ntable, you will *get* extreme bloat. You either need to do such\nupdates as a series of smaller updates with VACUUM commands in\nbetween, or schedule your aggressive maintenance for a time when it\ncan have exclusive access to the tables with minimal impact.\n \nReporting the other issues without mentioning the VACUUM FULL\nprocesses is a little bit like calling from the Titanic to mention\nthat the ship isn't going as fast as it should and neglecting to\nmention the iceberg. :-)\n \n-Kevin\n",
"msg_date": "Fri, 15 Jun 2012 14:03:35 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update blocking a select count(*)?"
},
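A minimal sketch of the batched-update approach suggested above, using the table and column names that appear in the lock listing earlier in the thread (vins, vin_counts, insp_count); the id ranges are illustrative only. Each batch commits and the table is vacuumed before the next batch starts, so dead row versions are reused instead of doubling the table size:

    -- repeat, advancing the id window each round
    BEGIN;
    UPDATE vins
       SET insp_count = vc.count
      FROM vin_counts vc
     WHERE vins.id = vc.vin_id
       AND vins.id BETWEEN 1 AND 100000;   -- next round: 100001..200000, and so on
    COMMIT;
    VACUUM vins;   -- reclaim the dead versions left behind by this batch

With that pattern the aggressive VACUUM FULL (and its AccessExclusiveLock) is usually unnecessary.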
{
"msg_contents": "Sure. The last column are the series of commands to produce the outputs.\nThis is coming from pgadmin3. I should have mentioned before that this is\nrunning windows but that shouldn't matter for this particular sense I hope.\n\nThe first column is the PID, the last column is the command running. The\ndates are the start time of the operations. The YES/NO is the running state\nof the process. In the activity section the 2nd to last column is the\nprocess blocking the executing process.\n\nThanks,\n~Ben\n\nOn Fri, Jun 15, 2012 at 2:54 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Benedict Holland <[email protected]> wrote:\n> > Sorry about the raw text but this is what I am seeing:\n> >\n> > [wrapped text without column headers]\n>\n> Could you try that as an attachment, to avoid wrapping? Also, the\n> column headers, and/or the query used to generate those results\n> would be helpful.\n>\n> -Kevin\n>",
"msg_date": "Fri, 15 Jun 2012 15:04:32 -0400",
"msg_from": "Benedict Holland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Update blocking a select count(*)?"
},
{
"msg_contents": "Yes. I needed to do a full vacuum. Again, the database is very large. I\nbatch inserted quite a lot of data and then modified that data. The vacuum\nisn't blocking anything. It was blocking other tables (as expected) but\ncontinues to run and clean. My tables in general are around 10GB, each\nupdate seems to nearly double the size of it so I required a full vacuum.\nThe blocked statements are the select count(*) and the alter table. Both\nare blocked on the update table command. The alter table command SHOULD be\nblocked and that is fine. The select count(*) should never be blocked as\nthat is the whole point of running an MVCC operation at least to my\nunderstanding. I can even accept the use case that the select should block\nwith an Alter Table operation if data is retrieved from the table, but a\nselect count(*) only returns the number of rows and should be table space\nindependent. I also don't understand why a select count(*) requires an\nAccessShareLock. I don't understand why a select should lock anything at\nall.\n\n~Ben\n\nOn Fri, Jun 15, 2012 at 3:03 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Benedict Holland <[email protected]> wrote:\n>\n> > 10:25:08.329-04 vacuum (analyze, verbose, full)\n> > 2096 rmv 33528 postgres 8/151\n> > AccessExclusiveLock\n> > Yes 2012-06-15 10:25:08.329-04 vacuum (analyze, verbose,\n> > full)\n> > 2096 rmv 50267 postgres 8/151\n> > AccessExclusiveLock\n> > Yes 2012-06-15 10:25:08.329-04 vacuum (analyze, verbose,\n> > full)\n>\n> You have three VACUUM FULL commands running? VACUUM FULL is very\n> aggressive maintenance, which is only needed for cases of extreme\n> bloat. It does lock the table against any concurrent access, since\n> it is completely rewriting it.\n>\n> Now, if you are running UPDATE statements which affect all rows in a\n> table, you will *get* extreme bloat. You either need to do such\n> updates as a series of smaller updates with VACUUM commands in\n> between, or schedule your aggressive maintenance for a time when it\n> can have exclusive access to the tables with minimal impact.\n>\n> Reporting the other issues without mentioning the VACUUM FULL\n> processes is a little bit like calling from the Titanic to mention\n> that the ship isn't going as fast as it should and neglecting to\n> mention the iceberg. :-)\n>\n> -Kevin\n>\n\nYes. I needed to do a full vacuum. Again, the database is very large. I batch inserted quite a lot of data and then modified that data. The vacuum isn't blocking anything. It was blocking other tables (as expected) but continues to run and clean. My tables in general are around 10GB, each update seems to nearly double the size of it so I required a full vacuum. The blocked statements are the select count(*) and the alter table. Both are blocked on the update table command. The alter table command SHOULD be blocked and that is fine. The select count(*) should never be blocked as that is the whole point of running an MVCC operation at least to my understanding. I can even accept the use case that the select should block with an Alter Table operation if data is retrieved from the table, but a select count(*) only returns the number of rows and should be table space independent. I also don't understand why a select count(*) requires an AccessShareLock. I don't understand why a select should lock anything at all. 
\n~BenOn Fri, Jun 15, 2012 at 3:03 PM, Kevin Grittner <[email protected]> wrote:\nBenedict Holland <[email protected]> wrote:\n\n> 10:25:08.329-04 vacuum (analyze, verbose, full)\n> 2096 rmv 33528 postgres 8/151\n> AccessExclusiveLock\n> Yes 2012-06-15 10:25:08.329-04 vacuum (analyze, verbose,\n> full)\n> 2096 rmv 50267 postgres 8/151\n> AccessExclusiveLock\n> Yes 2012-06-15 10:25:08.329-04 vacuum (analyze, verbose,\n> full)\n\nYou have three VACUUM FULL commands running? VACUUM FULL is very\naggressive maintenance, which is only needed for cases of extreme\nbloat. It does lock the table against any concurrent access, since\nit is completely rewriting it.\n\nNow, if you are running UPDATE statements which affect all rows in a\ntable, you will *get* extreme bloat. You either need to do such\nupdates as a series of smaller updates with VACUUM commands in\nbetween, or schedule your aggressive maintenance for a time when it\ncan have exclusive access to the tables with minimal impact.\n\nReporting the other issues without mentioning the VACUUM FULL\nprocesses is a little bit like calling from the Titanic to mention\nthat the ship isn't going as fast as it should and neglecting to\nmention the iceberg. :-)\n\n-Kevin",
"msg_date": "Fri, 15 Jun 2012 15:12:00 -0400",
"msg_from": "Benedict Holland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Update blocking a select count(*)?"
},
{
"msg_contents": "For kicks I stopped the full vacuum and the status of the remaining\nprocesses has not changed. The select count(*) is still blocked by the\nupdate.\n\n~Ben\n\n\n\nOn Fri, Jun 15, 2012 at 3:12 PM, Benedict Holland <\[email protected]> wrote:\n\n> Yes. I needed to do a full vacuum. Again, the database is very large. I\n> batch inserted quite a lot of data and then modified that data. The vacuum\n> isn't blocking anything. It was blocking other tables (as expected) but\n> continues to run and clean. My tables in general are around 10GB, each\n> update seems to nearly double the size of it so I required a full vacuum.\n> The blocked statements are the select count(*) and the alter table. Both\n> are blocked on the update table command. The alter table command SHOULD be\n> blocked and that is fine. The select count(*) should never be blocked as\n> that is the whole point of running an MVCC operation at least to my\n> understanding. I can even accept the use case that the select should block\n> with an Alter Table operation if data is retrieved from the table, but a\n> select count(*) only returns the number of rows and should be table space\n> independent. I also don't understand why a select count(*) requires an\n> AccessShareLock. I don't understand why a select should lock anything at\n> all.\n>\n> ~Ben\n>\n>\n> On Fri, Jun 15, 2012 at 3:03 PM, Kevin Grittner <\n> [email protected]> wrote:\n>\n>> Benedict Holland <[email protected]> wrote:\n>>\n>> > 10:25:08.329-04 vacuum (analyze, verbose, full)\n>> > 2096 rmv 33528 postgres 8/151\n>> > AccessExclusiveLock\n>> > Yes 2012-06-15 10:25:08.329-04 vacuum (analyze, verbose,\n>> > full)\n>> > 2096 rmv 50267 postgres 8/151\n>> > AccessExclusiveLock\n>> > Yes 2012-06-15 10:25:08.329-04 vacuum (analyze, verbose,\n>> > full)\n>>\n>> You have three VACUUM FULL commands running? VACUUM FULL is very\n>> aggressive maintenance, which is only needed for cases of extreme\n>> bloat. It does lock the table against any concurrent access, since\n>> it is completely rewriting it.\n>>\n>> Now, if you are running UPDATE statements which affect all rows in a\n>> table, you will *get* extreme bloat. You either need to do such\n>> updates as a series of smaller updates with VACUUM commands in\n>> between, or schedule your aggressive maintenance for a time when it\n>> can have exclusive access to the tables with minimal impact.\n>>\n>> Reporting the other issues without mentioning the VACUUM FULL\n>> processes is a little bit like calling from the Titanic to mention\n>> that the ship isn't going as fast as it should and neglecting to\n>> mention the iceberg. :-)\n>>\n>> -Kevin\n>>\n>\n>\n\nFor kicks I stopped the full vacuum and the status of the remaining processes has not changed. The select count(*) is still blocked by the update. ~BenOn Fri, Jun 15, 2012 at 3:12 PM, Benedict Holland <[email protected]> wrote:\nYes. I needed to do a full vacuum. Again, the database is very large. I batch inserted quite a lot of data and then modified that data. The vacuum isn't blocking anything. It was blocking other tables (as expected) but continues to run and clean. My tables in general are around 10GB, each update seems to nearly double the size of it so I required a full vacuum. The blocked statements are the select count(*) and the alter table. Both are blocked on the update table command. The alter table command SHOULD be blocked and that is fine. 
The select count(*) should never be blocked as that is the whole point of running an MVCC operation at least to my understanding. I can even accept the use case that the select should block with an Alter Table operation if data is retrieved from the table, but a select count(*) only returns the number of rows and should be table space independent. I also don't understand why a select count(*) requires an AccessShareLock. I don't understand why a select should lock anything at all. \n~BenOn Fri, Jun 15, 2012 at 3:03 PM, Kevin Grittner <[email protected]> wrote:\nBenedict Holland <[email protected]> wrote:\n\n> 10:25:08.329-04 vacuum (analyze, verbose, full)\n> 2096 rmv 33528 postgres 8/151\n> AccessExclusiveLock\n> Yes 2012-06-15 10:25:08.329-04 vacuum (analyze, verbose,\n> full)\n> 2096 rmv 50267 postgres 8/151\n> AccessExclusiveLock\n> Yes 2012-06-15 10:25:08.329-04 vacuum (analyze, verbose,\n> full)\n\nYou have three VACUUM FULL commands running? VACUUM FULL is very\naggressive maintenance, which is only needed for cases of extreme\nbloat. It does lock the table against any concurrent access, since\nit is completely rewriting it.\n\nNow, if you are running UPDATE statements which affect all rows in a\ntable, you will *get* extreme bloat. You either need to do such\nupdates as a series of smaller updates with VACUUM commands in\nbetween, or schedule your aggressive maintenance for a time when it\ncan have exclusive access to the tables with minimal impact.\n\nReporting the other issues without mentioning the VACUUM FULL\nprocesses is a little bit like calling from the Titanic to mention\nthat the ship isn't going as fast as it should and neglecting to\nmention the iceberg. :-)\n\n-Kevin",
"msg_date": "Fri, 15 Jun 2012 15:27:14 -0400",
"msg_from": "Benedict Holland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Update blocking a select count(*)?"
},
{
"msg_contents": "Benedict Holland <[email protected]> wrote:\n\n> Yes. I needed to do a full vacuum. Again, the database is very\n> large. I batch inserted quite a lot of data and then modified that\n> data. The vacuum isn't blocking anything. It was blocking other\n> tables (as expected) but continues to run and clean. My tables in\n> general are around 10GB, each update seems to nearly double the\n> size of it so I required a full vacuum.\n \nI was trying to suggest techniques which would prevent that bloat\nand make the VACUUM FULL unnecessary. But, now that I've had a\nchance to format the attachment into a readable format, I agree that\nit isn't part of the problem. The iceberg in this case is the ALTER\nTABLE, which is competing with two other queries.\n \n> The blocked statements are the select count(*) and the alter\n> table.\n \nOK.\n \n> Both are blocked on the update table command.\n \nNot directly. The lock held by the UPDATE would *not* block the\nSELECT; but it *does* block the ALTER TABLE command, which can't\nshare the table while it changes the structure of the table. The\nSELECT is blocked behind the ALTER TABLE.\n \n> The alter table command SHOULD be blocked and that is fine.\n \nI'm glad we're on the same page there.\n \n> The select count(*) should never be blocked as that is the whole\n> point of running an MVCC operation at least to my understanding. I\n> can even accept the use case that the select should block with an\n> Alter Table operation if data is retrieved from the table, but a\n> select count(*) only returns the number of rows and should be\n> table space independent.\n \nIn PostgreSQL SELECT count(*) must scan the table to see which rows\nare visible to the executing database transaction. Without that, it\ncan't give a completely accurate count from a transactional\nperspective. If you can settle for a non-transactional\napproximation, select the reltuples value from the pg_class row for\nthe table.\n \n> I also don't understand why a select count(*) requires an\n> AccessShareLock. I don't understand why a select should lock\n> anything at all.\n \nSo that the table isn't dropped or truncated while the count is\nscanning the table.\n \n-Kevin\n",
"msg_date": "Fri, 15 Jun 2012 14:32:10 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update blocking a select count(*)?"
},
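For readers of the archive, a simplified, self-contained query in the spirit of the wiki pages linked earlier in the thread; it reports only direct blockers (the recursive wiki query follows the whole chain, e.g. UPDATE -> ALTER TABLE -> SELECT) and relies only on the standard pg_locks view:

    SELECT w.pid                  AS waiting_pid,
           w.mode                 AS wanted_mode,
           w.relation::regclass   AS relation,
           h.pid                  AS blocking_pid,
           h.mode                 AS held_mode
      FROM pg_locks w
      JOIN pg_locks h
        ON h.granted
       AND NOT w.granted
       AND h.pid <> w.pid
       AND h.locktype = w.locktype
       AND h.database      IS NOT DISTINCT FROM w.database
       AND h.relation      IS NOT DISTINCT FROM w.relation
       AND h.transactionid IS NOT DISTINCT FROM w.transactionid;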
{
"msg_contents": "You were completely correct. I stopped the Alter Table and the select is\nnow running. Is it a bug that the blocking process reported is the finial\nprocess but really the process blocking the intermediate? If alter table\ncan block a select but the update can't, then I personally would consider\nthis a rather large bug because from, the DB perspective, the wrong\ninformation is being presented. This also means I am now very skeptical\nthat the blocking processes are correct in these sorts of situations. I\ncan't be the first person to discover this and thank you for bearing with\nme.\n\n> In PostgreSQL SELECT count(*) must scan the table to see which rows\n> are visible to the executing database transaction. Without that, it\n> can't give a completely accurate count from a transactional\n> perspective. If you can settle for a non-transactional\n> approximation, select the reltuples value from the pg_class row for\n> the table.\n\nI agree with you somewhat. I would assume that \"select count(*)\" is special\nin the sense that it is table schema independent. I would actually hope\nthat this is an order 1 operation since the total table length should be\nstored somewhere as it's a reasonably useful source of information. Because\nit is schema independent, even an alter table shouldn't block it as why\nshould it? The transaction comes in when you are adding more data to the\nend of the table so the select count(*) needs a transaction to guarantee a\nfinish only. This should not block or be blocked on anything. The where\nclause, a group by or a distinct clause etc. should block on an Alter\ntable. Is this just an edge case which is not worth looking at?\n\nThank you so much for your help.\n~Ben\n\nOn Fri, Jun 15, 2012 at 3:32 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Benedict Holland <[email protected]> wrote:\n>\n> > Yes. I needed to do a full vacuum. Again, the database is very\n> > large. I batch inserted quite a lot of data and then modified that\n> > data. The vacuum isn't blocking anything. It was blocking other\n> > tables (as expected) but continues to run and clean. My tables in\n> > general are around 10GB, each update seems to nearly double the\n> > size of it so I required a full vacuum.\n>\n> I was trying to suggest techniques which would prevent that bloat\n> and make the VACUUM FULL unnecessary. But, now that I've had a\n> chance to format the attachment into a readable format, I agree that\n> it isn't part of the problem. The iceberg in this case is the ALTER\n> TABLE, which is competing with two other queries.\n>\n> > The blocked statements are the select count(*) and the alter\n> > table.\n>\n> OK.\n>\n> > Both are blocked on the update table command.\n>\n> Not directly. The lock held by the UPDATE would *not* block the\n> SELECT; but it *does* block the ALTER TABLE command, which can't\n> share the table while it changes the structure of the table. The\n> SELECT is blocked behind the ALTER TABLE.\n>\n> > The alter table command SHOULD be blocked and that is fine.\n>\n> I'm glad we're on the same page there.\n>\n> > The select count(*) should never be blocked as that is the whole\n> > point of running an MVCC operation at least to my understanding. 
I\n> > can even accept the use case that the select should block with an\n> > Alter Table operation if data is retrieved from the table, but a\n> > select count(*) only returns the number of rows and should be\n> > table space independent.\n>\n> In PostgreSQL SELECT count(*) must scan the table to see which rows\n> are visible to the executing database transaction. Without that, it\n> can't give a completely accurate count from a transactional\n> perspective. If you can settle for a non-transactional\n> approximation, select the reltuples value from the pg_class row for\n> the table.\n>\n> > I also don't understand why a select count(*) requires an\n> > AccessShareLock. I don't understand why a select should lock\n> > anything at all.\n>\n> So that the table isn't dropped or truncated while the count is\n> scanning the table.\n>\n> -Kevin\n>\n\nYou were completely correct. I stopped the Alter Table and the select is now running. Is it a bug that the blocking process reported is the finial process but really the process blocking the intermediate? If alter table can block a select but the update can't, then I personally would consider this a rather large bug because from, the DB perspective, the wrong information is being presented. This also means I am now very skeptical that the blocking processes are correct in these sorts of situations. I can't be the first person to discover this and thank you for bearing with me. \n> In PostgreSQL SELECT count(*) must scan the table to see which rows> are visible to the executing database transaction. Without that, it\n> can't give a completely accurate count from a transactional\n> perspective. If you can settle for a non-transactional\n> approximation, select the reltuples value from the pg_class row for\n> the table.I agree with you somewhat. I would assume that \"select count(*)\" is special in the sense that it is table schema independent. I would actually hope that this is an order 1 operation since the total table length should be stored somewhere as it's a reasonably useful source of information. Because it is schema independent, even an alter table shouldn't block it as why should it? The transaction comes in when you are adding more data to the end of the table so the select count(*) needs a transaction to guarantee a finish only. This should not block or be blocked on anything. The where clause, a group by or a distinct clause etc. should block on an Alter table. Is this just an edge case which is not worth looking at? \nThank you so much for your help.~BenOn Fri, Jun 15, 2012 at 3:32 PM, Kevin Grittner <[email protected]> wrote:\nBenedict Holland <[email protected]> wrote:\n\n> Yes. I needed to do a full vacuum. Again, the database is very\n> large. I batch inserted quite a lot of data and then modified that\n> data. The vacuum isn't blocking anything. It was blocking other\n> tables (as expected) but continues to run and clean. My tables in\n> general are around 10GB, each update seems to nearly double the\n> size of it so I required a full vacuum.\n\nI was trying to suggest techniques which would prevent that bloat\nand make the VACUUM FULL unnecessary. But, now that I've had a\nchance to format the attachment into a readable format, I agree that\nit isn't part of the problem. The iceberg in this case is the ALTER\nTABLE, which is competing with two other queries.\n\n> The blocked statements are the select count(*) and the alter\n> table.\n\nOK.\n\n> Both are blocked on the update table command.\n\nNot directly. 
The lock held by the UPDATE would *not* block the\nSELECT; but it *does* block the ALTER TABLE command, which can't\nshare the table while it changes the structure of the table. The\nSELECT is blocked behind the ALTER TABLE.\n\n> The alter table command SHOULD be blocked and that is fine.\n\nI'm glad we're on the same page there.\n\n> The select count(*) should never be blocked as that is the whole\n> point of running an MVCC operation at least to my understanding. I\n> can even accept the use case that the select should block with an\n> Alter Table operation if data is retrieved from the table, but a\n> select count(*) only returns the number of rows and should be\n> table space independent.\n\nIn PostgreSQL SELECT count(*) must scan the table to see which rows\nare visible to the executing database transaction. Without that, it\ncan't give a completely accurate count from a transactional\nperspective. If you can settle for a non-transactional\napproximation, select the reltuples value from the pg_class row for\nthe table.\n\n> I also don't understand why a select count(*) requires an\n> AccessShareLock. I don't understand why a select should lock\n> anything at all.\n\nSo that the table isn't dropped or truncated while the count is\nscanning the table.\n\n-Kevin",
"msg_date": "Fri, 15 Jun 2012 15:45:48 -0400",
"msg_from": "Benedict Holland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Update blocking a select count(*)?"
},
{
"msg_contents": "Benedict Holland <[email protected]> wrote:\n \n> I can even accept the use case that the select should block with\n> an Alter Table operation if data is retrieved from the table, but\n> a select count(*) only returns the number of rows and should be\n> table space independent.\n \nJust as an example of why the data must be scanned for transactional\nbehavior. Open three connections to the same database. On the\nfirst, run this:\n \ncreate table t (id int not null);\ninsert into t select generate_series(1, 1000000);\nvacuum analyze t;\nbegin;\ndelete from t where id between 1 and 50000;\n \nThen, on the second, run this:\n \nbegin;\ninsert into t select generate_series(1000001, 1000600);\n \nNow, run this on each of the three connections:\n \nselect count(*) from t;\n \nYou should not get the same count on each one. Depending on your\ntransactional context, you will get 950000, 1000600, or 1000000. \nOver and over as long as the modifying transactions are open. If\nyou want a fast approximation:\n \nselect reltuples from pg_class where oid = 't'::regclass;\n reltuples \n-----------\n 1e+06\n(1 row)\n \n-Kevin\n",
"msg_date": "Fri, 15 Jun 2012 14:51:26 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update blocking a select count(*)?"
},
{
"msg_contents": "I see! Thank you very much!\n\n~Ben\n\nOn Fri, Jun 15, 2012 at 3:51 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Benedict Holland <[email protected]> wrote:\n>\n> > I can even accept the use case that the select should block with\n> > an Alter Table operation if data is retrieved from the table, but\n> > a select count(*) only returns the number of rows and should be\n> > table space independent.\n>\n> Just as an example of why the data must be scanned for transactional\n> behavior. Open three connections to the same database. On the\n> first, run this:\n>\n> create table t (id int not null);\n> insert into t select generate_series(1, 1000000);\n> vacuum analyze t;\n> begin;\n> delete from t where id between 1 and 50000;\n>\n> Then, on the second, run this:\n>\n> begin;\n> insert into t select generate_series(1000001, 1000600);\n>\n> Now, run this on each of the three connections:\n>\n> select count(*) from t;\n>\n> You should not get the same count on each one. Depending on your\n> transactional context, you will get 950000, 1000600, or 1000000.\n> Over and over as long as the modifying transactions are open. If\n> you want a fast approximation:\n>\n> select reltuples from pg_class where oid = 't'::regclass;\n> reltuples\n> -----------\n> 1e+06\n> (1 row)\n>\n> -Kevin\n>\n\nI see! Thank you very much! ~BenOn Fri, Jun 15, 2012 at 3:51 PM, Kevin Grittner <[email protected]> wrote:\nBenedict Holland <[email protected]> wrote:\n\n> I can even accept the use case that the select should block with\n> an Alter Table operation if data is retrieved from the table, but\n> a select count(*) only returns the number of rows and should be\n> table space independent.\n\nJust as an example of why the data must be scanned for transactional\nbehavior. Open three connections to the same database. On the\nfirst, run this:\n\ncreate table t (id int not null);\ninsert into t select generate_series(1, 1000000);\nvacuum analyze t;\nbegin;\ndelete from t where id between 1 and 50000;\n\nThen, on the second, run this:\n\nbegin;\ninsert into t select generate_series(1000001, 1000600);\n\nNow, run this on each of the three connections:\n\nselect count(*) from t;\n\nYou should not get the same count on each one. Depending on your\ntransactional context, you will get 950000, 1000600, or 1000000.\nOver and over as long as the modifying transactions are open. If\nyou want a fast approximation:\n\nselect reltuples from pg_class where oid = 't'::regclass;\n reltuples\n-----------\n 1e+06\n(1 row)\n\n-Kevin",
"msg_date": "Fri, 15 Jun 2012 15:58:15 -0400",
"msg_from": "Benedict Holland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Update blocking a select count(*)?"
},
{
"msg_contents": "Benedict Holland <[email protected]> wrote:\n \n> Is it a bug that the blocking process reported is the finial\n> process but really the process blocking the intermediate?\n \nWhat reported that? The PostgreSQL server doesn't report such\nthings directly, and I don't know pgadmin, so I don't know about\nthat tool. I wrote the recursive query on this page:\n \nhttp://wiki.postgresql.org/wiki/Lock_dependency_information\n \nSo if that reported anything incorrecly, please let me know so I can\nfix it.\n \nBy the way, the example with the three connections would have been\nbetter had I suggested a BEGIN TRANSACTION ISOLATION LEVEL\nREPEATABLE READ; on the third connection. With that, even if one or\nboth of the transactions on the other connections committed, the\nthird transaction's count should remain unchanged.\n \n-Kevin\n",
"msg_date": "Fri, 15 Jun 2012 15:00:06 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update blocking a select count(*)?"
},
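A short sketch of the variation described above, continuing the three-connection example with table t: under REPEATABLE READ the third session keeps the count from its first snapshot even after the other two sessions commit.

    -- third connection
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT count(*) FROM t;   -- snapshot taken here: 1000000
    -- connections one and two now COMMIT their delete and insert
    SELECT count(*) FROM t;   -- still 1000000 inside this transaction
    COMMIT;
    SELECT count(*) FROM t;   -- new snapshot: 950600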
{
"msg_contents": "I ran the scripts on the page and both returned empty (though I have\nqueries running and currently nothing blocks). I don't know what they\nshould have been. The output was from PgAdmin3 which is a UI for postgres.\nI assume that they get this queried information from something inside of\npostgres as I can't imagine the query tool doing something other than\nquerying the database for specs. I think it looks at the PID. This very\nwell might be a PgAdmin issue and have nothing to do with postgres.\n\n~Ben\n\nOn Fri, Jun 15, 2012 at 4:00 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Benedict Holland <[email protected]> wrote:\n>\n> > Is it a bug that the blocking process reported is the finial\n> > process but really the process blocking the intermediate?\n>\n> What reported that? The PostgreSQL server doesn't report such\n> things directly, and I don't know pgadmin, so I don't know about\n> that tool. I wrote the recursive query on this page:\n>\n> http://wiki.postgresql.org/wiki/Lock_dependency_information\n>\n> So if that reported anything incorrecly, please let me know so I can\n> fix it.\n>\n> By the way, the example with the three connections would have been\n> better had I suggested a BEGIN TRANSACTION ISOLATION LEVEL\n> REPEATABLE READ; on the third connection. With that, even if one or\n> both of the transactions on the other connections committed, the\n> third transaction's count should remain unchanged.\n>\n> -Kevin\n>\n\nI ran the scripts on the page and both returned empty (though I have queries running and currently nothing blocks). I don't know what they should have been. The output was from PgAdmin3 which is a UI for postgres. I assume that they get this queried information from something inside of postgres as I can't imagine the query tool doing something other than querying the database for specs. I think it looks at the PID. This very well might be a PgAdmin issue and have nothing to do with postgres.\n~BenOn Fri, Jun 15, 2012 at 4:00 PM, Kevin Grittner <[email protected]> wrote:\nBenedict Holland <[email protected]> wrote:\n\n> Is it a bug that the blocking process reported is the finial\n> process but really the process blocking the intermediate?\n\nWhat reported that? The PostgreSQL server doesn't report such\nthings directly, and I don't know pgadmin, so I don't know about\nthat tool. I wrote the recursive query on this page:\n\nhttp://wiki.postgresql.org/wiki/Lock_dependency_information\n\nSo if that reported anything incorrecly, please let me know so I can\nfix it.\n\nBy the way, the example with the three connections would have been\nbetter had I suggested a BEGIN TRANSACTION ISOLATION LEVEL\nREPEATABLE READ; on the third connection. With that, even if one or\nboth of the transactions on the other connections committed, the\nthird transaction's count should remain unchanged.\n\n-Kevin",
"msg_date": "Fri, 15 Jun 2012 16:29:02 -0400",
"msg_from": "Benedict Holland <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Update blocking a select count(*)?"
},
{
"msg_contents": "Benedict Holland <[email protected]> wrote:\n \n> I ran the scripts on the page and both returned empty (though I\n> have queries running and currently nothing blocks). I don't know\n> what they should have been.\n \nIt only shows information on blocking, so the list should be empty\nwhen there is none. :-) If it works as intended, it would have\nshown the chain of blocking, from the update to the alter to the\nselect.\n \n> The output was from PgAdmin3 which is a UI for postgres. I assume\n> that they get this queried information from something inside of\n> postgres as I can't imagine the query tool doing something other\n> than querying the database for specs.\n \nThey are probably doing something internally which is somewhat\nsimilar to the recursive query on that page. It sounds like when\nthere is a chain or tree of blocking, they show the process at the\nfront of the parade, rather than the immediate blocker. I can't say\nthat's right or wrong, but it should be documented so that people\ncan understand what they're looking at. Even better would be to\nmake a nice graphical tree of the blocking, but that would be\ngetting pretty fancy. :-)\n \n-Kevin\n",
"msg_date": "Fri, 15 Jun 2012 15:38:39 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Update blocking a select count(*)?"
}
] |
[
{
"msg_contents": "Hello.\n\nToday I've found a query that I thought will be fast turned out to be slow.\nThe problem is correlated exists with join - it does not want to make\ncorrelated nested loop to make exists check.\nEven if I force it to use nested loop, it materialized join uncorrelated\nand then filters it. It's OK when exists does not have join. Also good old\nleft join where X=null works fast.\nNote that I could see same problem for both exists and not exists.\nBelow is test case (tested on 9.1.4) with explains.\n\n\ncreate temporary table o(o_val,c_val) as select v, v/2 from\ngenerate_series(1,1000000) v;\ncreate temporary table i(o_ref, l_ref) as select\ngenerate_series(1,1000000), generate_series(1,10);\ncreate temporary table l(l_val, l_name) as select v, 'n_' || v from\ngenerate_series(1,10) v;\ncreate index o_1 on o(o_val);\ncreate index o_2 on o(c_val);\ncreate index i_1 on i(o_ref);\ncreate index i_2 on i(l_ref);\ncreate index l_1 on l(l_val);\ncreate index l_2 on l(l_name);\nanalyze o;\nanalyze i;\nanalyze l;\nexplain analyze select 1 from o where not exists (select 1 from i join l on\nl_ref = l_val where l_name='n_2' and o_ref=o_val) and c_val=33;\n-- http://explain.depesz.com/s/Rvw\nexplain analyze select 1 from o where not exists (select 1 from i join l on\nl_ref = l_val where l_val=2 and o_ref=o_val) and c_val=33;\n-- http://explain.depesz.com/s/fVHw\nexplain analyze select 1 from o where not exists (select 1 from i where\nl_ref=2 and o_ref=o_val) and c_val=33;\n-- http://explain.depesz.com/s/HgN\nexplain analyze select 1 from o left join i on o_ref=o_val left join l on\nl_ref = l_val and l_name='n_2' where o_ref is null and c_val=33;\n-- http://explain.depesz.com/s/mLA\nset enable_hashjoin=false;\nexplain analyze select 1 from o where not exists (select 1 from i join l on\nl_ref = l_val where l_name='n_2' and o_ref=o_val) and c_val=33;\n-- http://explain.depesz.com/s/LYu\nrollback;\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nHello.Today I've found a query that I thought will be fast turned out to be slow. The problem is correlated exists with join - it does not want to make correlated nested loop to make exists check.\nEven if I force it to use nested loop, it materialized join uncorrelated and then filters it. It's OK when exists does not have join. 
Also good old left join where X=null works fast.Note that I could see same problem for both exists and not exists.\nBelow is test case (tested on 9.1.4) with explains.create temporary table o(o_val,c_val) as select v, v/2 from generate_series(1,1000000) v;create temporary table i(o_ref, l_ref) as select generate_series(1,1000000), generate_series(1,10);\ncreate temporary table l(l_val, l_name) as select v, 'n_' || v from generate_series(1,10) v;create index o_1 on o(o_val);create index o_2 on o(c_val);create index i_1 on i(o_ref);\ncreate index i_2 on i(l_ref);create index l_1 on l(l_val);create index l_2 on l(l_name);analyze o;analyze i;analyze l;explain analyze select 1 from o where not exists (select 1 from i join l on l_ref = l_val where l_name='n_2' and o_ref=o_val) and c_val=33;\n-- http://explain.depesz.com/s/Rvwexplain analyze select 1 from o where not exists (select 1 from i join l on l_ref = l_val where l_val=2 and o_ref=o_val) and c_val=33;\n-- http://explain.depesz.com/s/fVHwexplain analyze select 1 from o where not exists (select 1 from i where l_ref=2 and o_ref=o_val) and c_val=33;-- http://explain.depesz.com/s/HgN\nexplain analyze select 1 from o left join i on o_ref=o_val left join l on l_ref = l_val and l_name='n_2' where o_ref is null and c_val=33;-- http://explain.depesz.com/s/mLA\nset enable_hashjoin=false;explain analyze select 1 from o where not exists (select 1 from i join l on l_ref = l_val where l_name='n_2' and o_ref=o_val) and c_val=33;-- http://explain.depesz.com/s/LYu\nrollback;-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Mon, 18 Jun 2012 16:47:42 +0300",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "correlated exists with join is slow."
},
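For anyone on a pre-9.2 release, the last two statements of the test case above are the practical workarounds. One more variant, sketched against the same tables and assuming l_name is unique in l (which it is in this test data): resolve the lookup first so the NOT EXISTS no longer contains a join.

    explain analyze
    select 1
      from o
     where c_val = 33
       and not exists (select 1
                         from i
                        where i.o_ref = o.o_val
                          and i.l_ref = (select l_val from l where l_name = 'n_2'));

This reduces to the join-free form that already planned well in the test above (the /s/HgN plan).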
{
"msg_contents": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]> writes:\n> Today I've found a query that I thought will be fast turned out to be slow.\n> The problem is correlated exists with join - it does not want to make\n> correlated nested loop to make exists check.\n\n9.2 will make this all better. These are exactly the type of case where\nyou need the \"parameterized path\" stuff.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Jun 2012 10:52:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: correlated exists with join is slow."
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n \n> 9.2 will make this all better. These are exactly the type of case\n> where you need the \"parameterized path\" stuff.\n \nYeah, with HEAD on my workstation all of these queries run in less\nthan 0.1 ms. On older versions, I'm seeing times like 100 ms to 150\nms for the slow cases. So in this case, parameterized paths allow\nan improvement of more than three orders of magnitude. :-)\n \n-Kevin\n",
"msg_date": "Mon, 18 Jun 2012 09:58:24 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: correlated exists with join is slow."
},
{
"msg_contents": "Glad to hear postgresql becomes better and better :)\n\n2012/6/18 Tom Lane <[email protected]>\n\n> =?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]> writes:\n> > Today I've found a query that I thought will be fast turned out to be\n> slow.\n> > The problem is correlated exists with join - it does not want to make\n> > correlated nested loop to make exists check.\n>\n> 9.2 will make this all better. These are exactly the type of case where\n> you need the \"parameterized path\" stuff.\n>\n> regards, tom lane\n>\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nGlad to hear postgresql becomes better and better :)2012/6/18 Tom Lane <[email protected]>\n=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]> writes:\n> Today I've found a query that I thought will be fast turned out to be slow.\n> The problem is correlated exists with join - it does not want to make\n> correlated nested loop to make exists check.\n\n9.2 will make this all better. These are exactly the type of case where\nyou need the \"parameterized path\" stuff.\n\n regards, tom lane\n-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Mon, 18 Jun 2012 18:02:32 +0300",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: correlated exists with join is slow."
}
] |
[
{
"msg_contents": "I am running the following query:\n\nSELECT res1.x, res1.y, res1.z\nFROM test t\nJOIN residue_atom_coords res1 ON\n\t\tt.struct_id_1 = res1.struct_id AND\n\t\tres1.atomno IN (1,2,3,4) AND \n\t\t(res1.seqpos BETWEEN t.pair_1_helix_1_begin AND t.pair_1_helix_1_end)\nWHERE\nt.compare_id BETWEEN 1 AND 10000;\n\nThe 'test' table is very large (~270 million rows) as is the residue_atom_coords table (~540 million rows).\n\nThe number of compare_ids I select in the 'WHERE' clause determines the join type in the following way:\n\nt.compare_id BETWEEN 1 AND 5000;\n\n Nested Loop (cost=766.52..15996963.12 rows=3316307 width=24)\n -> Index Scan using test_pkey on test t (cost=0.00..317.20 rows=5372 width=24)\n Index Cond: ((compare_id >= 1) AND (compare_id <= 5000))\n -> Bitmap Heap Scan on residue_atom_coords res1 (cost=766.52..2966.84 rows=625 width=44)\n Recheck Cond: ((struct_id = t.struct_id_1) AND (seqpos >= t.pair_1_helix_1_begin) AND (seqpos <= t.pair_1_helix_1_end) AND (atomno = ANY ('{1,2,3,4}'::integer[])))\n -> Bitmap Index Scan on residue_atom_coords_pkey (cost=0.00..766.36 rows=625 width=0)\n Index Cond: ((struct_id = t.struct_id_1) AND (seqpos >= t.pair_1_helix_1_begin) AND (seqpos <= t.pair_1_helix_1_end) AND (atomno = ANY ('{1,2,3,4}'::integer[])))\n\nt.compare_id BETWEEN 1 AND 10000;\n\n Hash Join (cost=16024139.91..20940899.94 rows=6633849 width=24)\n Hash Cond: (t.struct_id_1 = res1.struct_id)\n Join Filter: ((res1.seqpos >= t.pair_1_helix_1_begin) AND (res1.seqpos <= t.pair_1_helix_1_end))\n -> Index Scan using test_pkey on test t (cost=0.00..603.68 rows=10746 width=24)\n Index Cond: ((compare_id >= 1) AND (compare_id <= 10000))\n -> Hash (cost=13357564.16..13357564.16 rows=125255660 width=44)\n -> Seq Scan on residue_atom_coords res1 (cost=0.00..13357564.16 rows=125255660 width=44)\n Filter: (atomno = ANY ('{1,2,3,4}'::integer[]))\n\nThe nested loop join performs very quickly, whereas the hash join is incredibly slow. If I disable the hash join temporarily then a nested loop join is used in the second case and is the query runs much more quickly. How can I change my configuration to favor the nested join in this case? Is this a bad idea? Alternatively, since I will be doing selections like this many times, what indexes can be put in place to expedite the query without mucking with the query optimizer? I've already created an index on the struct_id field of residue_atom_coords (each unique struct_id should only have a small number of rows for the residue_atom_coords table).\n\nThanks in advance,\nTim\n\n\n",
"msg_date": "Tue, 19 Jun 2012 17:34:56 -0400",
"msg_from": "Tim Jacobs <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why is a hash join being used?"
},
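One low-risk way to compare the two plans without touching the server-wide configuration is to scope the planner toggle to a single transaction; a sketch only, using the query from the post above:

    BEGIN;
    SET LOCAL enable_hashjoin = off;
    EXPLAIN ANALYZE
    SELECT res1.x, res1.y, res1.z
      FROM test t
      JOIN residue_atom_coords res1
        ON t.struct_id_1 = res1.struct_id
       AND res1.atomno IN (1,2,3,4)
       AND res1.seqpos BETWEEN t.pair_1_helix_1_begin AND t.pair_1_helix_1_end
     WHERE t.compare_id BETWEEN 1 AND 10000;
    ROLLBACK;   -- the SET LOCAL vanishes with the transaction

If the nested-loop plan is consistently the winner on this hardware, cost settings such as random_page_cost and effective_cache_size are the usual knobs to revisit before forcing plans.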
{
"msg_contents": "On Wed, Jun 20, 2012 at 1:34 AM, Tim Jacobs <[email protected]> wrote:\n> The nested loop join performs very quickly, whereas the hash join is incredibly slow. If I disable the hash join temporarily then a nested loop join is used in the second case and is the query runs much more quickly. How can I change my configuration to favor the nested join in this case? Is this a bad idea?\n\nFirst do ANALYZE the tables and try the tests again.\n\nIf it helped check your autovacuum configuration. Look at\nhttp://www.postgresql.org/docs/9.1/static/routine-vacuuming.html#AUTOVACUUM\nand the pg_stat_user_tables table (last_* and *_count fields).\n\nIf it still produces wrong plan then try to increase statistics\nentries by ALTER TABLE SET STATISTICS (do not forget to ANALYZE after\ndoing it) or by the default_statistics_target configuration parameter.\nRead more about it here\nhttp://www.postgresql.org/docs/9.1/static/planner-stats.html.\n\n> Alternatively, since I will be doing selections like this many times, what indexes can be put in place to expedite the query without mucking with the query optimizer? I've already created an index on the struct_id field of residue_atom_coords (each unique struct_id should only have a small number of rows for the residue_atom_coords table).\n\nAs I can see everything is okay with indexes.\n\n>\n> Thanks in advance,\n> Tim\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSergey Konoplev\n\na database architect, software developer at PostgreSQL-Consulting.com\nhttp://www.postgresql-consulting.com\n\nJabber: [email protected] Skype: gray-hemp Phone: +79160686204\n",
"msg_date": "Wed, 20 Jun 2012 17:36:20 +0400",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is a hash join being used?"
},
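A hedged sketch of the statistics suggestion above, using the join columns from the original query; which columns (and what target) actually help is a judgment call and worth verifying with EXPLAIN ANALYZE afterwards:

    ALTER TABLE test ALTER COLUMN struct_id_1 SET STATISTICS 1000;
    ALTER TABLE residue_atom_coords ALTER COLUMN struct_id SET STATISTICS 1000;
    ANALYZE test;
    ANALYZE residue_atom_coords;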
{
"msg_contents": "Tim Jacobs <[email protected]> wrote:\n \n> The nested loop join performs very quickly, whereas the hash join\n> is incredibly slow. If I disable the hash join temporarily then a\n> nested loop join is used in the second case and is the query runs\n> much more quickly. How can I change my configuration to favor the\n> nested join in this case? Is this a bad idea?\n \nBefore anyone can make solid suggestions on what you might want to\nchange in your configuration, they would need to know more. Please\nread this page:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n... and repost with your PostgreSQL version, your configuration\noverrides, a description of your hardware, and EXPLAIN ANALYZE\noutput from the query (rather than just EXPLAIN output).\n \nYou might not be modeling your costs correctly, you might not be\nallocating resources well, you might be on an old version without an\noptimizer as smart as more recent versions, your statistics might be\nout of date, or you might be running into an optimizer weakness of\nsome sort.\n \n-Kevin\n",
"msg_date": "Fri, 22 Jun 2012 10:59:44 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is a hash join being used?"
}
] |
[
{
"msg_contents": "Hi all,\n\nas far as i looked around about the new feature: index-only scan, i guess\nthis feature will not include the option such as ms-sql INCLUDE.\n\nwell, i have a table with columns: a,b,c\ni query the table like this: select a,c from table where a=x and b=y\nas for now, i have unique-index on (a,b)\n\nin the future (when upgrading to 9.2), i would like to have unique-index on\n(a,b, INCLUDE c). but that wont be possible (right?).\n\nso... in order to have index-only scan, i will have to create an index like\n(a,b,c), but this has problems:\n1. i lose the uniqueness enforcement of (a,b), unless, i will create 2\nindexes: (a,b) and (a,b,c).\n2. every update to column c would result in an unnecessary index-key-update\n(or what ever you call that operation), which is not just updating a tuple,\nbut also an attempt to re-ordering it(!).\n3. i just wonder: practically there is uniqueness of (a,b). now, if i\ncreate index like (a,b,c) the optimizer dose not know about the uniqueness\nof (a,b), therefore i afraid, it may not pick the best query-plan..\n\nThanks for any comment.\n\nHi all,\nas far as i looked around about the new feature: index-only scan, i guess this feature will not include the option such as ms-sql INCLUDE.well, i have a table with columns: a,b,c\n\n\ni query the table like this: select a,c from table where a=x and b=y\nas for now, i have unique-index on (a,b)in the future (when upgrading to 9.2), i would like to have unique-index on (a,b, INCLUDE c). but that wont be possible (right?). \nso... in order to have index-only scan, i will have to create an index like (a,b,c), but this has problems: 1. i lose the uniqueness enforcement of (a,b), unless, i will create 2 indexes: (a,b) and (a,b,c).\n2. every update to column c would result in an unnecessary index-key-update (or what ever you call that operation), which is not just updating a tuple, but also an attempt to re-ordering it(!).3. i just wonder: practically there is uniqueness of (a,b). now, if i create index like (a,b,c) the optimizer dose not know about the uniqueness of (a,b), therefore i afraid, it may not pick the best query-plan..\nThanks for any comment.",
"msg_date": "Wed, 20 Jun 2012 07:46:48 +0300",
"msg_from": "Eyal Wilde <[email protected]>",
"msg_from_op": true,
"msg_subject": "index-only scan is missing the INCLUDE feature"
},
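A sketch of the two options weighed in the post above, for a hypothetical table t(a, b, c); the names are illustrative only:

    -- uniqueness of (a, b) is kept either way
    CREATE UNIQUE INDEX t_a_b_key ON t (a, b);

    -- optional second, wider index so the query can avoid heap fetches
    CREATE INDEX t_a_b_c_idx ON t (a, b, c);

    -- the query in question; with the wider index (and a reasonably current
    -- visibility map) 9.2 can use an index-only scan
    SELECT a, c FROM t WHERE a = 1 AND b = 2;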
{
"msg_contents": "On 06/20/2012 12:46 PM, Eyal Wilde wrote:\n> Hi all,\n>\n> as far as i looked around about the new feature: index-only scan, i \n> guess this feature will not include the option such as ms-sql INCLUDE.\n>\nFor those of us who don't know MS-SQL, can you give a quick explanation \nof what the INCLUDE keyword in an index definition is expected to do, or \nsome documentation references? It's possible to guess it somewhat from \nyour description, but it's helpful to be specific when asking a question \nabout features from another DBMS.\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 06/20/2012 12:46 PM, Eyal Wilde\n wrote:\n\n\n\nHi all,\n \nas far as i looked around about the new feature:\n index-only scan, i guess this feature will not include the\n option such as ms-sql INCLUDE.\n\n\n\n\n\n For those of us who don't know MS-SQL, can you give a quick\n explanation of what the INCLUDE keyword in an index definition is\n expected to do, or some documentation references? It's possible to\n guess it somewhat from your description, but it's helpful to be\n specific when asking a question about features from another DBMS.\n\n --\n Craig Ringer",
"msg_date": "Wed, 20 Jun 2012 22:11:05 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index-only scan is missing the INCLUDE feature"
},
{
"msg_contents": "On 06/20/2012 09:11 AM, Craig Ringer wrote:\n\n> For those of us who don't know MS-SQL, can you give a quick\n> explanation of what the INCLUDE keyword in an index definition is\n> expected to do, or some documentation references?\n\nHe's talking about what MS SQL Server commonly calls a \"covering index.\" \nIn these cases, you can specify columns to be included in the index, but \nnot actually part of the calculated hash. This prevents a trip to the \ntable data, so selects can be serviced entirely by an index scan.\n\nPostgreSQL is about half way there by allowing index-only scans, though \nI've no idea if they intend on adding further functionality like this. \nEffectively you can trade index bloat for query speed. But considering \nthe differences between the engines, it might not be necessary. I \ncouldn't say.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Wed, 20 Jun 2012 10:32:19 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index-only scan is missing the INCLUDE feature"
},
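For reference, the SQL Server syntax being discussed looks roughly like the first statement below (the included column is stored in the index leaf pages but is not part of the key or the uniqueness check); the closest PostgreSQL 9.2 equivalent is the second, where c does become a trailing key column:

    -- SQL Server (illustrative only, not PostgreSQL syntax)
    CREATE UNIQUE NONCLUSTERED INDEX ix_t_a_b ON dbo.t (a, b) INCLUDE (c);

    -- PostgreSQL 9.2: no INCLUDE clause, so c must be part of the key
    CREATE INDEX t_a_b_c_idx ON t (a, b, c);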
{
"msg_contents": "On 06/20/2012 11:32 PM, Shaun Thomas wrote:\n> On 06/20/2012 09:11 AM, Craig Ringer wrote:\n>\n>> For those of us who don't know MS-SQL, can you give a quick\n>> explanation of what the INCLUDE keyword in an index definition is\n>> expected to do, or some documentation references?\n>\n> He's talking about what MS SQL Server commonly calls a \"covering \n> index.\" In these cases, you can specify columns to be included in the \n> index, but not actually part of the calculated hash. This prevents a \n> trip to the table data, so selects can be serviced entirely by an \n> index scan.\n\nOh, OK, so it's a covering index with added fields that don't form part \nof the searchable index structure to make the index a little less \nexpensive than a fully covering index on all the columns of interest. \nFair enough. Thanks for the explanation.\n\nEyal, you'll get a better response to questions about other DBMSs if you \nexplain what you need/want to do with the desired feature and what that \nfeature does in the other DBMS.\n>\n> PostgreSQL is about half way there by allowing index-only scans, \n> though I've no idea if they intend on adding further functionality \n> like this.\n\nThere's certainly lots of interest in adding more, but not that many \npeople with the expertise to be able to do it - and fewer still who're \npaid to work on Pg so they have time to focus on it. Covering indexes \nwith Pg's MVCC model seem to be particularly challenging, too.\n\n--\nCraig Ringer\n\n\n",
"msg_date": "Thu, 21 Jun 2012 10:45:41 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index-only scan is missing the INCLUDE feature"
},
{
"msg_contents": "Le jeudi 21 juin 2012 04:45:41, Craig Ringer a écrit :\n> On 06/20/2012 11:32 PM, Shaun Thomas wrote:\n> > On 06/20/2012 09:11 AM, Craig Ringer wrote:\n> >> For those of us who don't know MS-SQL, can you give a quick\n> >> explanation of what the INCLUDE keyword in an index definition is\n> >> expected to do, or some documentation references?\n> > \n> > He's talking about what MS SQL Server commonly calls a \"covering\n> > index.\" In these cases, you can specify columns to be included in the\n> > index, but not actually part of the calculated hash. This prevents a\n> > trip to the table data, so selects can be serviced entirely by an\n> > index scan.\n> \n> Oh, OK, so it's a covering index with added fields that don't form part\n> of the searchable index structure to make the index a little less\n> expensive than a fully covering index on all the columns of interest.\n> Fair enough. Thanks for the explanation.\n> \n> Eyal, you'll get a better response to questions about other DBMSs if you\n> explain what you need/want to do with the desired feature and what that\n> feature does in the other DBMS.\n> \n> > PostgreSQL is about half way there by allowing index-only scans,\n> > though I've no idea if they intend on adding further functionality\n> > like this.\n> \n> There's certainly lots of interest in adding more, but not that many\n> people with the expertise to be able to do it - and fewer still who're\n> paid to work on Pg so they have time to focus on it. Covering indexes\n> with Pg's MVCC model seem to be particularly challenging, too.\n\nThere was a recent thread on -hackers about index with UNIQUEness of some \ncolumns only. The objective was near the one you describe here.\nSo you're not alone looking after that.\n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation",
"msg_date": "Mon, 25 Jun 2012 17:22:00 +0200",
"msg_from": "=?utf-8?q?C=C3=A9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index-only scan is missing the INCLUDE feature"
}
] |
[
{
"msg_contents": "Hi, all.\n\nthis is an obligation from the past:\nhttp://archives.postgresql.org/pgsql-performance/2012-05/msg00017.php\n\nthe same test, that did ~230 results, is now doing ~700 results. that is,\nBTW even better than mssql.\n\nthe ultimate solution for that problem was to NOT to do \"ON COMMIT DELETE\nROWS\" for the temporary tables. instead, we just do \"DELETE FROM\ntemp_table1\".\n\ndoing \"TRUNCATE temp_table1\" is defiantly the worst case (~100 results in\nthe same test). this is something we knew for a long time, which is why we\ndid \"ON COMMIT DELETE ROWS\", but eventually it turned out as far from being\nthe best.\n\nanother minor issue is that when configuring\n temp_tablespace='other_tablespace', the sequences of the temporary tables\nremain on the 'main_tablespace'.\n\ni hope that will help making postgres even better :)\n\nHi, all.this is an obligation from the past:http://archives.postgresql.org/pgsql-performance/2012-05/msg00017.php\nthe same test, that did ~230 results, is now doing ~700 results. that is, BTW even better than mssql.the ultimate solution for that problem was to NOT to do \"ON COMMIT DELETE ROWS\" for the temporary tables. instead, we just do \"DELETE FROM temp_table1\".\ndoing \"TRUNCATE temp_table1\" is defiantly the worst case (~100 results in the same test). this is something we knew for a long time, which is why we did \"ON COMMIT DELETE ROWS\", but eventually it turned out as far from being the best.\nanother minor issue is that when configuring temp_tablespace='other_tablespace', the sequences of the temporary tables remain on the 'main_tablespace'. i hope that will help making postgres even better :)",
"msg_date": "Wed, 20 Jun 2012 09:01:13 +0300",
"msg_from": "Eyal Wilde <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
{
"msg_contents": "On 6/20/2012 1:01 AM, Eyal Wilde wrote:\n> Hi, all.\n>\n> this is an obligation from the past:\n> http://archives.postgresql.org/pgsql-performance/2012-05/msg00017.php\n>\n> the same test, that did ~230 results, is now doing ~700 results. that\n> is, BTW even better than mssql.\n>\n> the ultimate solution for that problem was to NOT to do \"ON COMMIT\n> DELETE ROWS\" for the temporary tables. instead, we just do \"DELETE FROM\n> temp_table1\".\n>\n> doing \"TRUNCATE temp_table1\" is defiantly the worst case (~100 results\n> in the same test). this is something we knew for a long time, which is\n> why we did \"ON COMMIT DELETE ROWS\", but eventually it turned out as far\n> from being the best.\n>\n> another minor issue is that when configuring\n> temp_tablespace='other_tablespace', the sequences of the temporary\n> tables remain on the 'main_tablespace'.\n>\n> i hope that will help making postgres even better :)\n>\n\nDid you ever try re-writing some of the temp table usage to use \nsubselect's/views/cte/etc?\n\n-Andy\n\n",
"msg_date": "Wed, 20 Jun 2012 08:43:01 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
},
{
"msg_contents": "On Wed, Jun 20, 2012 at 8:43 AM, Andy Colson <[email protected]> wrote:\n>> this is an obligation from the past:\n>> http://archives.postgresql.org/pgsql-performance/2012-05/msg00017.php\n>>\n>> the same test, that did ~230 results, is now doing ~700 results. that\n>> is, BTW even better than mssql.\n>>\n>> the ultimate solution for that problem was to NOT to do \"ON COMMIT\n>> DELETE ROWS\" for the temporary tables. instead, we just do \"DELETE FROM\n>> temp_table1\".\n>>\n>> doing \"TRUNCATE temp_table1\" is defiantly the worst case (~100 results\n>> in the same test). this is something we knew for a long time, which is\n>> why we did \"ON COMMIT DELETE ROWS\", but eventually it turned out as far\n>> from being the best.\n>>\n>> another minor issue is that when configuring\n>> temp_tablespace='other_tablespace', the sequences of the temporary\n>> tables remain on the 'main_tablespace'.\n>>\n>> i hope that will help making postgres even better :)\n>>\n>\n> Did you ever try re-writing some of the temp table usage to use\n> subselect's/views/cte/etc?\n\nYeah -- especially CTE. But, assuming you really do need to keep a\ntemp table organized and you want absolutely minimum latency in the\ntemp table manipulating function, you can use a nifty trick so\norganize a table around txid_current();\n\nCREATE UNLOGGED TABLE Cache (txid BIGINT DEFAULT txid_current(), a\nTEXT, b TEXT);\nCREATE INDEX ON Cache(txid);\n-- or --\nCREATE INDEX ON Cache(txid, a); -- if a is lookup key etc.\n\nWhen you insert to the table let the default catch the current txid\nand make sure that all queries are properly filtering the table on\ntxid, and that all indexes are left prefixed on txid.\n\nWhy do this? Now the record delete operations can be delegated to an\nexternal process. At any time, a scheduled process can do:\nDELETE from Cache;\n\nThis is not guaranteed to be faster, but it probably will be.\n\nmerlin\n",
"msg_date": "Wed, 20 Jun 2012 09:55:52 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: scale up (postgresql vs mssql)"
}
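A hedged usage sketch of the txid-keyed cache described in the message above; the Cache table and its defaults are the ones defined there, while the key/value strings and the wrapping transaction are illustrative assumptions:

BEGIN;  -- keep the insert and the lookups in one transaction so txid_current() stays constant
INSERT INTO Cache (a, b) VALUES ('k1', 'v1');   -- txid column defaults to txid_current()
SELECT b FROM Cache WHERE txid = txid_current() AND a = 'k1';
COMMIT;

-- A separate scheduled job can then clear every session's rows in one statement:
DELETE FROM Cache;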
] |
[
{
"msg_contents": "Hi,\nWe started to think about using SSD drive for our telco system DB. Because we have many \"almost\" read-only data I think SSD is good candidate for our task. We would like to speed up process of read operation. \nI've read post (http://blog.2ndquadrant.com/intel_ssd_now_off_the_sherr_sh/) about SSD which have write safe functionality and two drive are recommended Intel 320 and Vertex2 Pro. Both drive are rather inexpensive but both using old SATA II.\nI tried to find newer faster version of Vertex because production Vertex 2 Pro is stopped but there is no information about new drives that has similar functionality and are cheap. Do you recommend cheap SSD drives that are suitable for DB needs?\n\nRegards\nMichal Szymanski\n",
"msg_date": "Wed, 20 Jun 2012 07:51:08 -0700 (PDT)",
"msg_from": "Michal Szymanski <[email protected]>",
"msg_from_op": true,
"msg_subject": "SSD, Postgres and safe write cache"
},
{
"msg_contents": "On 2012-06-20 16:51, Michal Szymanski wrote:\n> Hi,\n> We started to think about using SSD drive for our telco system DB. Because we have many \"almost\" read-only data I think SSD is good candidate for our task. We would like to speed up process of read operation.\n> I've read post (http://blog.2ndquadrant.com/intel_ssd_now_off_the_sherr_sh/) about SSD which have write safe functionality and two drive are recommended Intel 320 and Vertex2 Pro. Both drive are rather inexpensive but both using old SATA II.\n> I tried to find newer faster version of Vertex because production Vertex 2 Pro is stopped but there is no information about new drives that has similar functionality and are cheap. Do you recommend cheap SSD drives that are suitable for DB needs?\nWe were able to get OCZ Deneva 2's \n(http://www.oczenterprise.com/downloads/solutions/ocz-deneva2-r-mlc-2.5in_Product_Brief.pdf) \nfrom our supplier, which were suggested by OCZ as replacement for the \nvertex 2 pro's. They're marketed as safe under power failure and our \ntests with the diskchecker tool confirmed that.\n\nregards,\nYeb Havinga\n\n",
"msg_date": "Mon, 25 Jun 2012 18:05:12 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD, Postgres and safe write cache"
},
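For reference, a sketch of the kind of pull-the-plug test mentioned above, using Brad Fitzpatrick's diskchecker.pl; the hostname, file name and size are made up, and the exact flags should be double-checked against the script's own usage text:

# on a second machine, start the listener:
./diskchecker.pl -l
# on the machine with the SSD under test, write test data, then cut the power:
./diskchecker.pl -s listener-host create test_file 500
# after the test machine comes back up, verify what actually reached stable storage:
./diskchecker.pl -s listener-host verify test_file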
{
"msg_contents": "On 06/20/2012 10:51 AM, Michal Szymanski wrote:\n> We started to think about using SSD drive for our telco system DB.\n > Because we have many \"almost\" read-only data I think SSD is good \ncandidate\n> for our task. We would like to speed up process of read operation.\n\nMany read-only operations can be made as fast as possible just by \ngetting more RAM. SSD is only significantly faster than regular disk on \nreads if the working set of data is bigger than you can fit in memory, \nbut you can fit it all on SSD. That's not as many workloads as you \nmight guess.\n\n> I've read post (http://blog.2ndquadrant.com/intel_ssd_now_off_the_sherr_sh/) about\n > SSD which have write safe functionality and two drive are recommended\n > Intel 320 and Vertex2 Pro. Both drive are rather inexpensive but both \nusing old SATA II.\n\nIntel's 710 model is their more expensive one, but that's mainly due to \nlonger expected lifetime than speed: \nhttp://blog.2ndquadrant.com/intel_ssds_lifetime_and_the_32/\n\nI don't see a lot of need for a faster interface than SATA II on \ndatabase SSD yet. If you need the data really fast, it has to be in \nRAM. And if it's so large that you can't fit it in RAM, you're likely \nlooking at random I/O against the SSD--where most are hard pressed to \nsaturate even a SATA II bus. Indexes for example can really benefit \nfrom SSD instead of regular drives, but that's almost always random \naccess when you're in that situation.\n\nThere's not a lot of systems that are inside the narrow case where SATA \nII SSD isn't fast enough, but similar performance per dollar SATA III \nSSD is. Some of the PCI-E flash-based cards, like FusionIO's, can do a \nlot better than SATA II. But they tend to use more flash in parallel \ntoo, it's hard to get that much throughput out of most flash devices; \nit's not just that they transfer to the host faster.\n\nI'd build a prototype with whatever drives you have access to and try to \nmeasure what you need here. I hate to see people jump right toward \nleading edge SSD only to discover their real working set fits in memory \njust fine, so it doesn't even matter. Or that the bottleneck is \nsomewhere else entirely.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n",
"msg_date": "Sun, 01 Jul 2012 00:37:28 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD, Postgres and safe write cache"
}
] |
[
{
"msg_contents": "Hi all,\n\nI am currently playing with the nice pgbench tool.\nI would like to build a benchmark using pgbench with customized scenarios, \nin order to get something quite representative of a real workload.\nI have designed a few tables, with a simple script to populate them, and \ndefined 3 scenarios representing typical transactions.\n\nBut I have the following issue. Some tables have CHAR (or TEXT or VARCHAR) \ncolumns that belong to their primary key and I want to include into \npgbench scenarios statements with conditions on these CHAR columns, using \nsome random values generated by pgbench.\nAs pgbench \\set, \\setrandom or \\setshell meta-commands only manage integer \nvariables, I tried to use SQL conditions like:\n... where my_column = to_char(:var::integer,'00009FM') ...\nwith var previously defined by:\n\\setrandom var 1 :maxvar\n\nHaving previously loaded my_column with digits strings, I get the right \nresult. But ... this condition cannot use any index (tested on 9.2beta2). \nAs a result, I get looooooong index or table scans, which is of course not \nacceptable in my benchmark as it is not representative of the real data \naccess path :-((\n\nDoes someone has a trick to manage random char or text variable in pgbench \n?\n\nThanks by advance for any help.\nBest regards.\nPhilippe Beaudoin.\nHi all,\n\nI am currently playing with the nice\npgbench tool.\nI would like to build a benchmark using\npgbench with customized scenarios, in order to get something quite representative\nof a real workload.\nI have designed a few tables, with a\nsimple script to populate them, and defined 3 scenarios representing typical\ntransactions.\n\nBut I have the following issue. Some\ntables have CHAR (or TEXT or VARCHAR) columns that belong to their primary\nkey and I want to include into pgbench scenarios statements with conditions\non these CHAR columns, using some random values generated by pgbench.\nAs pgbench \\set, \\setrandom or \\setshell\nmeta-commands only manage integer variables, I tried to use SQL conditions\nlike:\n... where my_column = to_char(:var::integer,'00009FM')\n...\nwith var previously defined by:\n\\setrandom var 1 :maxvar\n\nHaving previously loaded my_column with\ndigits strings, I get the right result. But ... this condition cannot use\nany index (tested on 9.2beta2). As a result, I get looooooong index or\ntable scans, which is of course not acceptable in my benchmark as it is\nnot representative of the real data access path :-((\n\nDoes someone has a trick to manage random\nchar or text variable in pgbench ?\n\nThanks by advance for any help.\nBest regards.\nPhilippe Beaudoin.",
"msg_date": "Wed, 20 Jun 2012 21:51:38 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "random char or text variable in pgbench"
}
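One possible explanation, offered as an assumption rather than a confirmed diagnosis: when my_column is declared CHAR(n), comparing it with the text returned by to_char() is resolved as a text comparison, which an index on the char column cannot serve. A hedged workaround is to cast the generated value back to the column's declared type (my_table and the char(5) length are just examples matching the '00009FM' picture):

\setrandom var 1 :maxvar
SELECT count(*) FROM my_table
 WHERE my_column = to_char(:var::integer, '00009FM')::char(5);

An expression index on (my_column::text) would be another way to make the original predicate indexable; either variant is worth confirming with EXPLAIN before trusting the benchmark numbers.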
] |
[
{
"msg_contents": "I need to move a postgres 9.0 database -- with tables, indexes, and wals associated with 16 tablespaces on 12 logical drives -- to an existing raid 10 drive in another volume on the same server. Once I get the data off the initial 12 drives they will be reconfigured, at which point I'll need to move everything from the 2nd volume to the aforementioned 12 logical drives on the first volume. This is being done both to free up the 2nd volume and to better utilize raid 10.\n\nI checked around and found a way to create sql statements to alter the public tablespaces and indexes, but I haven't found anything that provides information about moving the numerous associated config files, log files, etc. \n\nANY comments, suggestions, or direction to existing documentation would be greatly appreciated. \n\nCurrent server info:\n\n- 4 dual-core AMD Opteron 2212 processors, 2010.485 MHz\n- 64GB RAM\n- 16 67GB RAID 1 drives and 1 464GB RAID 10 drive (all ext3) on 2 volumes.\n- Linux 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux\n\n\nThank you,\nMidge\n\n\n\n\n\n\n\n\n\n\nI need to move a postgres 9.0 database \n-- with tables, indexes, and wals associated with 16 tablespaces on 12 \nlogical drives -- to an existing raid 10 drive in another volume on the \nsame server. Once I get the data off the initial 12 drives they will be \nreconfigured, at which point I'll need to move everything from the 2nd volume to \nthe aforementioned 12 logical drives on the first volume. This is being done \nboth to free up the 2nd volume and to better utilize raid 10.\n \nI checked around and found a way to create \nsql statements to alter the public tablespaces and indexes, but I haven't found \nanything that provides information about moving the numerous associated config \nfiles, log files, etc. \n \n\nANY comments, suggestions, or direction to \nexisting documentation would be greatly appreciated. \n \nCurrent server info:\n \n- 4 dual-core AMD Opteron 2212 processors, \n2010.485 MHz\n- 64GB RAM\n- 16 67GB RAID 1 drives and 1 464GB RAID 10 \ndrive (all ext3) on 2 volumes.\n- Linux 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 \nEDT 2009 x86_64 x86_64 x86_64 GNU/Linux\n \n \nThank you,\nMidge",
"msg_date": "Wed, 20 Jun 2012 15:27:25 -0700",
"msg_from": "\"Midge Brown\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "moving tables"
},
{
"msg_contents": "On 6/20/12 3:27 PM, Midge Brown wrote:\n> I need to move a postgres 9.0 database -- with tables, indexes, and wals associated with 16 tablespaces on 12 logical drives -- to an existing raid 10 drive in another volume on the same server. Once I get the data off the initial 12 drives they will be reconfigured, at which point I'll need to move everything from the 2nd volume to the aforementioned 12 logical drives on the first volume. This is being done both to free up the 2nd volume and to better utilize raid 10.\n> \n> I checked around and found a way to create sql statements to alter the public tablespaces and indexes, but I haven't found anything that provides information about moving the numerous associated config files, log files, etc. \n> \n> ANY comments, suggestions, or direction to existing documentation would be greatly appreciated. \n\n1. back everything up.\n\n2. create a bunch of directories on the RAID10 to match the existing\ntablespaces (they won't be mounts, but Postgres doesn't care about that).\n\n3. shut down postgres\n\n4. copy all your files to the new directories\n\n5. change your mount points which were in use by the old tablespaces to\nsymlinks which point at the new diretories\n\n6. start postgres back up from the new location\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n",
"msg_date": "Wed, 20 Jun 2012 17:28:38 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: moving tables"
},
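A hedged shell sketch of steps 2 through 5 for a single tablespace; the paths are invented, the copy must happen while PostgreSQL is stopped, and the same steps are repeated for each tablespace:

mkdir -p /raid10/ts_data1
cp -a /mnt/disk1/. /raid10/ts_data1/      # -a preserves ownership, permissions and symlinks
umount /mnt/disk1
rmdir /mnt/disk1
ln -s /raid10/ts_data1 /mnt/disk1         # the old tablespace path now resolves to the new location
# start PostgreSQL again once every tablespace path has been redirected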
{
"msg_contents": "Josh Berkus wrote:\r\n> On 6/20/12 3:27 PM, Midge Brown wrote:\r\n>> I need to move a postgres 9.0 database -- with tables, indexes, and wals associated with 16\r\n>> tablespaces on 12 logical drives -- to an existing raid 10 drive in another volume on the same server.\r\n>> Once I get the data off the initial 12 drives they will be reconfigured, at which point I'll need to\r\n>> move everything from the 2nd volume to the aforementioned 12 logical drives on the first volume. This\r\n>> is being done both to free up the 2nd volume and to better utilize raid 10.\r\n>>\r\n>> I checked around and found a way to create sql statements to alter the public tablespaces and\r\n>> indexes, but I haven't found anything that provides information about moving the numerous associated\r\n>> config files, log files, etc.\r\n>>\r\n>> ANY comments, suggestions, or direction to existing documentation would be greatly appreciated.\r\n\r\n> 1. back everything up.\r\n> \r\n> 2. create a bunch of directories on the RAID10 to match the existing\r\n> tablespaces (they won't be mounts, but Postgres doesn't care about that).\r\n> \r\n> 3. shut down postgres\r\n> \r\n> 4. copy all your files to the new directories\r\n> \r\n> 5. change your mount points which were in use by the old tablespaces to\r\n> symlinks which point at the new diretories\r\n> \r\n> 6. start postgres back up from the new location\r\n\r\nShouldn't you also\r\n\r\n7. UPDATE spclocation in pg_tablespace ?\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Thu, 21 Jun 2012 10:34:38 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: moving tables"
},
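A hedged sketch of that extra step on 9.0, where pg_tablespace still has a spclocation column (the column was removed in 9.2); the tablespace name and path are invented:

-- run as superuser, after the files have been moved and the symlinks redirected
UPDATE pg_tablespace SET spclocation = '/raid10/ts_data1' WHERE spcname = 'ts_data1';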
{
"msg_contents": "Last night I created directories and moved files as outlined in Josh's very helpful reply to my original request. All seemed okay until we unmounted the drives from the first volume. I got the following error (with oid differences) whenever I tried to access any of the tables that were not originally on the 2nd volume raid 10:\n\nERROR: could not open file \"pg_tblspc/18505/PG_9.0_201008051/99644466/99645029\": No such file or directory\n\nWhen I looked at the files in the linked directories on the raid 10, it appeared that the oid (18505 in the above error) was missing. After we remounted the drives so that access could be restored, it occurred to me that I should have altered the tablespaces to match the move to the 2nd volume. Would that have dealt with the error I saw? \n\nOn further reflection, it seems that the best course of action would be to have only the one tablespace on the existing raid 10 drive that resides on the 2nd volume. Then the first volume can be reconfigured into one raid 10 and I could move everything to it and the 2nd volume can physically be removed for use in another server that I can configure as a hot standby. \n\nDoes this plan make sense? Any comments or suggestions are welcome. \n\nThanks,\nMidge\n ----- Original Message ----- \n From: Josh Berkus \n To: [email protected] \n Sent: Wednesday, June 20, 2012 5:28 PM\n Subject: Re: [PERFORM] moving tables\n\n\n On 6/20/12 3:27 PM, Midge Brown wrote:\n > I need to move a postgres 9.0 database -- with tables, indexes, and wals associated with 16 tablespaces on 12 logical drives -- to an existing raid 10 drive in another volume on the same server. Once I get the data off the initial 12 drives they will be reconfigured, at which point I'll need to move everything from the 2nd volume to the aforementioned 12 logical drives on the first volume. This is being done both to free up the 2nd volume and to better utilize raid 10.\n > \n > I checked around and found a way to create sql statements to alter the public tablespaces and indexes, but I haven't found anything that provides information about moving the numerous associated config files, log files, etc. \n > \n > ANY comments, suggestions, or direction to existing documentation would be greatly appreciated. \n\n 1. back everything up.\n\n 2. create a bunch of directories on the RAID10 to match the existing\n tablespaces (they won't be mounts, but Postgres doesn't care about that).\n\n 3. shut down postgres\n\n 4. copy all your files to the new directories\n\n 5. change your mount points which were in use by the old tablespaces to\n symlinks which point at the new diretories\n\n 6. start postgres back up from the new location\n\n -- \n Josh Berkus\n PostgreSQL Experts Inc.\n http://pgexperts.com\n\n\n\n -- \n Sent via pgsql-performance mailing list ([email protected])\n To make changes to your subscription:\n http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\n \nLast night I created directories and moved files \nas outlined in Josh's very helpful reply to my original request. All seemed okay \nuntil we unmounted the drives from the first volume. 
I got the following \nerror (with oid differences) whenever I tried to access any of the tables that \nwere not originally on the 2nd volume raid 10:\n \nERROR: could not open file \n\"pg_tblspc/18505/PG_9.0_201008051/99644466/99645029\": No such file or \ndirectory\n \nWhen I looked at the files in the linked \ndirectories on the raid 10, it appeared that the oid (18505 in the above error) \nwas missing. After we remounted the drives so that access could be restored, it \noccurred to me that I should have altered the tablespaces to match the move to \nthe 2nd volume. Would that have dealt with the \nerror I saw? \n \n\nOn further reflection, it seems that the best \ncourse of action would be to have only the one tablespace on the existing raid \n10 drive that resides on the 2nd volume. Then the first volume can be \nreconfigured into one raid 10 and I could move everything to it and the 2nd \nvolume can physically be removed for use in another server that I can configure \nas a hot standby. \n \nDoes this plan make sense? Any comments or \nsuggestions are welcome. \n \nThanks,\nMidge\n\n----- Original Message ----- \nFrom:\nJosh Berkus\n\nTo: [email protected]\n\nSent: Wednesday, June 20, 2012 5:28 \n PM\nSubject: Re: [PERFORM] moving \ntables\nOn 6/20/12 3:27 PM, Midge Brown wrote:> I need to move a \n postgres 9.0 database -- with tables, indexes, and wals associated with 16 \n tablespaces on 12 logical drives -- to an existing raid 10 drive in another \n volume on the same server. Once I get the data off the initial 12 drives they \n will be reconfigured, at which point I'll need to move everything from the 2nd \n volume to the aforementioned 12 logical drives on the first volume. This is \n being done both to free up the 2nd volume and to better utilize raid \n 10.> > I checked around and found a way to create sql statements \n to alter the public tablespaces and indexes, but I haven't found anything that \n provides information about moving the numerous associated config files, log \n files, etc. > > ANY comments, suggestions, or direction to \n existing documentation would be greatly appreciated. 1. back \n everything up.2. create a bunch of directories on the RAID10 to match \n the existingtablespaces (they won't be mounts, but Postgres doesn't care \n about that).3. shut down postgres4. copy all your files to the \n new directories5. change your mount points which were in use by the \n old tablespaces tosymlinks which point at the new diretories6. \n start postgres back up from the new location-- Josh \n BerkusPostgreSQL Experts Inc.http://pgexperts.com-- \n Sent via pgsql-performance mailing list ([email protected])To \n make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 11 Jul 2012 10:25:26 -0700",
"msg_from": "\"Midge Brown\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: moving tables"
}
] |
[
{
"msg_contents": "I have two tables node and relationship. Each relationship record connects two nodes and has an application keys (unfortunately named) that can be used by the application to look-up a relationship and get from one node to the other.\n \nMy query uses a node id and a description of a relationship from the node, and must find the \"next\" relationship that the node has. It does this by finding all the relationships that could be \"next\", ordering them and then getting the first. \n \nDetails are below but I end up with 6896 candidates for \"next\". \n \nIf I'm reading the output correctly it takes 13.509 ms to apply the filter and another 7 ms or so to do the sort of the remaining 6896 nodes.\n \nHave tried many index combinations to try and improve the results. I suspect that with so many nodes to sort, postgresql will opt for heap scan rather than index. But why does it not use the IDX_order_sort_down_2 index for the sort?\n \nThanks,\nAndy\n \n \nDetails..........\n \nVersion\n-------\nPostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.5.2, 64-bit\n\nTables\n------\nCREATE TABLE node ( \n node_id bigint NOT NULL,\n node_type int4 NOT NULL,\n c_state int4 NOT NULL,\n d_state int4 NOT NULL,\n sort_key bigint NOT NULL,\n permissions bytea NOT NULL,\n audit bytea NOT NULL,\n pkg_id bytea NULL,\n created timestamp NOT NULL\n);\n\nCREATE TABLE relationship ( \n rel_id bigint NOT NULL,\n rel_type integer NOT NULL,\n s_t_n bigint NOT NULL,\n t_s_n bigint NOT NULL,\n state integer NOT NULL,\n control integer NOT NULL,\n sort_key bigint NOT NULL,\n prime_key bytea NULL,\n prime_key_len integer NOT NULL,\n sec_key bytea NULL,\n sec_key_len integer NOT NULL,\n up_sort_key bigint NOT NULL,\n up_prime_key bytea NULL,\n up_prime_key_len integer NOT NULL,\n up_sec_key bytea NULL,\n up_sec_key_len integer NOT NULL,\n permissions bytea NOT NULL,\n t_s_n_type integer NOT NULL,\n created timestamp NOT NULL\n);\n\nConstraints\n-----------\n-- Primary keys \nALTER TABLE node ADD CONSTRAINT PK_node PRIMARY KEY (node_id);\n \nALTER TABLE relationship ADD CONSTRAINT PK_relationship PRIMARY KEY (rel_id);\n \n-- Foreign keys\nALTER TABLE relationship ADD CONSTRAINT FK_node_s FOREIGN KEY (s_t_n) REFERENCES node (node_id);\n \nALTER TABLE relationship ADD CONSTRAINT FK_node_n FOREIGN KEY (t_s_n) REFERENCES node (node_id);\n\n \nIndexes \n-------\nCREATE INDEX IDX_node_type ON node (node_type ASC) TABLESPACE ds_appex_ts_10\n;\nCREATE INDEX IDX_node_sort_key ON node (sort_key ASC) TABLESPACE ds_appex_ts_10\n;\nCREATE INDEX IDX_relationship_s_t_n ON relationship (s_t_n ASC) TABLESPACE ds_appex_ts_10 \n;\nCREATE INDEX IDX_relationship_t_s_n ON relationship (t_s_n ASC) TABLESPACE ds_appex_ts_10 \n;\nCREATE INDEX IDX_relationship_type ON relationship (rel_type ASC) TABLESPACE ds_appex_ts_10 \n;\nCREATE INDEX IDX_relationship_prime_key ON relationship (prime_key ASC) TABLESPACE ds_appex_ts_10 \n;\nCREATE INDEX IDX_relationship_u_prime_key ON relationship (up_prime_key ASC) TABLESPACE ds_appex_ts_10 \n;\nCREATE INDEX IDX_relationship_sec_key ON relationship (sec_key ASC) TABLESPACE ds_appex_ts_10 \n;\nCREATE INDEX IDX_order_first ON node(sort_key DESC, node_id DESC) TABLESPACE ds_appex_ts_10\n;\nCREATE INDEX IDX_order_sort_down_1 ON relationship(sort_key DESC, prime_key ASC NULLS FIRST, sec_key ASC NULLS FIRST) TABLESPACE ds_appex_ts_10\n;\nCREATE INDEX IDX_order_sort_down_2 ON relationship(sort_key DESC, prime_key ASC NULLS FIRST, sec_key DESC NULLS FIRST) TABLESPACE 
ds_appex_ts_10\n;\nCREATE INDEX IDX_order_sort_up ON relationship(up_sort_key DESC, up_prime_key ASC NULLS FIRST, up_sec_key ASC NULLS FIRST) TABLESPACE ds_appex_ts_10\n;\n \nQuery\n-----\nCREATE OR REPLACE FUNCTION sp_get_rel_sort_dup_sec_desc(in_rel_type1 integer, in_rel_type2 integer, in_node_type integer, in_own_guid bigint, in_prev_prime_key bytea, in_prev_prime_key_len integer, in_prev_sec_key bytea, in_prev_sec_key_len integer, in_prev_sort_key bigint, in_ctrl integer) RETURNS select_rel_holder AS\n'\ndeclare\nh select_rel_holder%rowtype;\n \nbegin\n SELECT INTO h r.rel_id, r.t_s_n, r.rel_type, r.sort_key,\n r.state,r.permissions, r.control, \n r.prime_key, r.prime_key_len, r.sec_key, r.sec_key_len, \n r.up_prime_key, r.up_prime_key_len, r.up_sec_key, r.up_sec_key_len\n FROM relationship r \n WHERE r.s_t_n = in_own_guid AND (r.rel_type = in_rel_type1 OR r.rel_type = in_rel_type2) \n AND\n (\n ( \n (\n r.prime_key > in_prev_prime_key\n OR\n ( r.prime_key = in_prev_prime_key AND r.sec_key < in_prev_sec_key)\n )\n AND\n r.sort_key = in_prev_sort_key\n )\n \n OR\n r.sort_key < in_prev_sort_key\n )\n AND t_s_n_type = in_node_type\n AND r.control >= in_ctrl\n ORDER BY sort_key DESC, prime_key ASC NULLS FIRST, sec_key DESC NULLS FIRST LIMIT 1;\n RETURN h;\nend\n'\nlanguage 'plpgsql' STABLE;\n \n \nEXPLAIN ANALYZE output\n-------------------------------\n Limit (cost=48.90..48.90 rows=1 width=89) (actual time=21.480..21.480 rows=1 loops=1)\n Output: rel_id, t_s_n, rel_type, sort_key, state, permissions, control, prime_key, prime_key_len, sec_key, sec_key_len, up_prime_key, up_prime_key_l\nen, up_sec_key, up_sec_key_len\n \n -> Sort (cost=48.90..48.90 rows=1 width=89) (actual time=21.479..21.479 rows=1 loops=1)\n Output: rel_id, t_s_n, rel_type, sort_key, state, permissions, control, prime_key, prime_key_len, sec_key, sec_key_len, up_prime_key, up_prime\n_key_len, up_sec_key, up_sec_key_len\n Sort Key: r.sort_key, r.prime_key, r.sec_key\n Sort Method: top-N heapsort Memory: 25kB\n \n -> Bitmap Heap Scan on public.relationship r (cost=3.39..48.89 rows=1 width=89) (actual time=1.034..13.509 rows=6986 loops=1)\n Output: rel_id, t_s_n, rel_type, sort_key, state, permissions, control, prime_key, prime_key_len, sec_key, sec_key_len, up_prime_key, up\n_prime_key_len, up_sec_key, up_sec_key_len\n Recheck Cond: (r.s_t_n = $4)\n Filter: ((r.control >= $10) AND (r.t_s_n_type = $3) AND ((r.rel_type = $1) OR (r.rel_type = $2)) AND ((((r.prime_key > $5) OR ((r.prime_\nkey = $5) AND (r.sec_key < $7))) AND (r.sort_key = $9)) OR (r.sort_key < $9)))\n \n -> Bitmap Index Scan on idx_relationship_s_t_n (cost=0.00..3.39 rows=18 width=0) (actual time=0.951..0.951 rows=6989 loops=1)\n Index Cond: (r.s_t_n = $4)\n \t\t \t \t\t \n\n\n\n\nI have two tables node and relationship. Each relationship record connects two nodes and has an application keys (unfortunately named) that can be used by the application to look-up a relationship and get from one node to the other.\n \nMy query uses a node id and a description of a relationship from the node, and must find the \"next\" relationship that the node has. It does this by finding all the relationships that could be \"next\", ordering them and then getting the first. \n \nDetails are below but I end up with 6896 candidates for \"next\". \n \nIf I'm reading the output correctly it takes 13.509 ms to apply the filter and another 7 ms or so to do the sort of the remaining 6896 nodes.\n \nHave tried many index combinations to try and improve the results. 
I suspect that with so many nodes to sort, postgresql will opt for heap scan rather than index. But why does it not use the IDX_order_sort_down_2 index for the sort?\n \nThanks,\nAndy\n \n \nDetails..........\n \nVersion-------PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.5.2, 64-bit\nTables------\nCREATE TABLE node ( node_id bigint NOT NULL, node_type int4 NOT NULL, c_state int4 NOT NULL, d_state int4 NOT NULL, sort_key bigint NOT NULL, permissions bytea NOT NULL, audit bytea NOT NULL, pkg_id bytea NULL, created timestamp NOT NULL);\nCREATE TABLE relationship ( rel_id bigint NOT NULL, rel_type integer NOT NULL, s_t_n bigint NOT NULL, t_s_n bigint NOT NULL, state integer NOT NULL, control integer NOT NULL, sort_key bigint NOT NULL, prime_key bytea NULL, prime_key_len integer NOT NULL, sec_key bytea NULL, sec_key_len integer NOT NULL, up_sort_key bigint NOT NULL, up_prime_key bytea NULL, up_prime_key_len integer NOT NULL, up_sec_key bytea NULL, up_sec_key_len integer NOT NULL, permissions bytea NOT NULL, t_s_n_type integer NOT NULL, created timestamp NOT NULL);\nConstraints-----------\n-- Primary keys ALTER TABLE node ADD CONSTRAINT PK_node PRIMARY KEY (node_id); ALTER TABLE relationship ADD CONSTRAINT PK_relationship PRIMARY KEY (rel_id); \n-- Foreign keysALTER TABLE relationship ADD CONSTRAINT FK_node_s FOREIGN KEY (s_t_n) REFERENCES node (node_id);\n \nALTER TABLE relationship ADD CONSTRAINT FK_node_n FOREIGN KEY (t_s_n) REFERENCES node (node_id);\n Indexes -------CREATE INDEX IDX_node_type ON node (node_type ASC) TABLESPACE ds_appex_ts_10;CREATE INDEX IDX_node_sort_key ON node (sort_key ASC) TABLESPACE ds_appex_ts_10;CREATE INDEX IDX_relationship_s_t_n ON relationship (s_t_n ASC) TABLESPACE ds_appex_ts_10 ;CREATE INDEX IDX_relationship_t_s_n ON relationship (t_s_n ASC) TABLESPACE ds_appex_ts_10 ;CREATE INDEX IDX_relationship_type ON relationship (rel_type ASC) TABLESPACE ds_appex_ts_10 ;CREATE INDEX IDX_relationship_prime_key ON relationship (prime_key ASC) TABLESPACE ds_appex_ts_10 ;CREATE INDEX IDX_relationship_u_prime_key ON relationship (up_prime_key ASC) TABLESPACE ds_appex_ts_10 ;CREATE INDEX IDX_relationship_sec_key ON relationship (sec_key ASC) TABLESPACE ds_appex_ts_10 ;CREATE INDEX IDX_order_first ON node(sort_key DESC, node_id DESC) TABLESPACE ds_appex_ts_10;CREATE INDEX IDX_order_sort_down_1 ON relationship(sort_key DESC, prime_key ASC NULLS FIRST, sec_key ASC NULLS FIRST) TABLESPACE ds_appex_ts_10;CREATE INDEX IDX_order_sort_down_2 ON relationship(sort_key DESC, prime_key ASC NULLS FIRST, sec_key DESC NULLS FIRST) TABLESPACE ds_appex_ts_10;CREATE INDEX IDX_order_sort_up ON relationship(up_sort_key DESC, up_prime_key ASC NULLS FIRST, up_sec_key ASC NULLS FIRST) TABLESPACE ds_appex_ts_10; \nQuery-----CREATE OR REPLACE FUNCTION sp_get_rel_sort_dup_sec_desc(in_rel_type1 integer, in_rel_type2 integer, in_node_type integer, in_own_guid bigint, in_prev_prime_key bytea, in_prev_prime_key_len integer, in_prev_sec_key bytea, in_prev_sec_key_len integer, in_prev_sort_key bigint, in_ctrl integer) RETURNS select_rel_holder AS'declareh select_rel_holder%rowtype; begin SELECT INTO h r.rel_id, r.t_s_n, r.rel_type, r.sort_key, r.state,r.permissions, r.control, r.prime_key, r.prime_key_len, r.sec_key, r.sec_key_len, r.up_prime_key, r.up_prime_key_len, r.up_sec_key, r.up_sec_key_len FROM relationship r WHERE r.s_t_n = in_own_guid AND (r.rel_type = in_rel_type1 OR r.rel_type = in_rel_type2) AND ( ( ( r.prime_key > in_prev_prime_key OR ( r.prime_key = in_prev_prime_key 
AND r.sec_key < in_prev_sec_key) ) AND r.sort_key = in_prev_sort_key ) OR r.sort_key < in_prev_sort_key ) AND t_s_n_type = in_node_type AND r.control >= in_ctrl\n ORDER BY sort_key DESC, prime_key ASC NULLS FIRST, sec_key DESC NULLS FIRST LIMIT 1; RETURN h;end'language 'plpgsql' STABLE;\n \n \nEXPLAIN ANALYZE output------------------------------- Limit (cost=48.90..48.90 rows=1 width=89) (actual time=21.480..21.480 rows=1 loops=1) Output: rel_id, t_s_n, rel_type, sort_key, state, permissions, control, prime_key, prime_key_len, sec_key, sec_key_len, up_prime_key, up_prime_key_len, up_sec_key, up_sec_key_len\n \n -> Sort (cost=48.90..48.90 rows=1 width=89) (actual time=21.479..21.479 rows=1 loops=1) Output: rel_id, t_s_n, rel_type, sort_key, state, permissions, control, prime_key, prime_key_len, sec_key, sec_key_len, up_prime_key, up_prime_key_len, up_sec_key, up_sec_key_len Sort Key: r.sort_key, r.prime_key, r.sec_key Sort Method: top-N heapsort Memory: 25kB\n \n -> Bitmap Heap Scan on public.relationship r (cost=3.39..48.89 rows=1 width=89) (actual time=1.034..13.509 rows=6986 loops=1) Output: rel_id, t_s_n, rel_type, sort_key, state, permissions, control, prime_key, prime_key_len, sec_key, sec_key_len, up_prime_key, up_prime_key_len, up_sec_key, up_sec_key_len Recheck Cond: (r.s_t_n = $4) Filter: ((r.control >= $10) AND (r.t_s_n_type = $3) AND ((r.rel_type = $1) OR (r.rel_type = $2)) AND ((((r.prime_key > $5) OR ((r.prime_key = $5) AND (r.sec_key < $7))) AND (r.sort_key = $9)) OR (r.sort_key < $9)))\n \n -> Bitmap Index Scan on idx_relationship_s_t_n (cost=0.00..3.39 rows=18 width=0) (actual time=0.951..0.951 rows=6989 loops=1) Index Cond: (r.s_t_n = $4)",
"msg_date": "Thu, 21 Jun 2012 20:07:01 +0000",
"msg_from": "Andy Halsall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can I do better than this heapscan and sort?"
},
{
"msg_contents": "On Thu, Jun 21, 2012 at 3:07 PM, Andy Halsall <[email protected]> wrote:\n> I have two tables node and relationship. Each relationship record connects\n> two nodes and has an application keys (unfortunately named) that can be used\n> by the application to look-up a relationship and get from one node to the\n> other.\n>\n> My query uses a node id and a description of a relationship from the node,\n> and must find the \"next\" relationship that the node has. It does this by\n> finding all the relationships that could be \"next\", ordering them and then\n> getting the first.\n>\n> Details are below but I end up with 6896 candidates for \"next\".\n>\n> If I'm reading the output correctly it takes 13.509 ms to apply the filter\n> and another 7 ms or so to do the sort of the remaining 6896 nodes.\n>\n> Have tried many index combinations to try and improve the results. I suspect\n> that with so many nodes to sort, postgresql will opt for heap scan rather\n> than index. But why does it not use the IDX_order_sort_down_2 index for the\n> sort?\n>\n> Thanks,\n> Andy\n>\n>\n> Details..........\n>\n> Version\n> -------\n> PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.5.2,\n> 64-bit\n>\n> Tables\n> ------\n> CREATE TABLE node (\n> node_id bigint NOT NULL,\n> node_type int4 NOT NULL,\n> c_state int4 NOT NULL,\n> d_state int4 NOT NULL,\n> sort_key bigint NOT NULL,\n> permissions bytea NOT NULL,\n> audit bytea NOT NULL,\n> pkg_id bytea NULL,\n> created timestamp NOT NULL\n> );\n>\n> CREATE TABLE relationship (\n> rel_id bigint NOT NULL,\n> rel_type integer NOT NULL,\n> s_t_n bigint NOT NULL,\n> t_s_n bigint NOT NULL,\n> state integer NOT NULL,\n> control integer NOT NULL,\n> sort_key bigint NOT NULL,\n> prime_key bytea NULL,\n> prime_key_len integer NOT NULL,\n> sec_key bytea NULL,\n> sec_key_len integer NOT NULL,\n> up_sort_key bigint NOT NULL,\n> up_prime_key bytea NULL,\n> up_prime_key_len integer NOT NULL,\n> up_sec_key bytea NULL,\n> up_sec_key_len integer NOT NULL,\n> permissions bytea NOT NULL,\n> t_s_n_type integer NOT NULL,\n> created timestamp NOT NULL\n> );\n>\n> Constraints\n> -----------\n> -- Primary keys\n> ALTER TABLE node ADD CONSTRAINT PK_node PRIMARY KEY (node_id);\n>\n> ALTER TABLE relationship ADD CONSTRAINT PK_relationship PRIMARY KEY\n> (rel_id);\n>\n> -- Foreign keys\n> ALTER TABLE relationship ADD CONSTRAINT FK_node_s FOREIGN KEY (s_t_n)\n> REFERENCES node (node_id);\n>\n> ALTER TABLE relationship ADD CONSTRAINT FK_node_n FOREIGN KEY (t_s_n)\n> REFERENCES node (node_id);\n>\n>\n> Indexes\n> -------\n> CREATE INDEX IDX_node_type ON node (node_type ASC) TABLESPACE ds_appex_ts_10\n> ;\n> CREATE INDEX IDX_node_sort_key ON node (sort_key ASC) TABLESPACE\n> ds_appex_ts_10\n> ;\n> CREATE INDEX IDX_relationship_s_t_n ON relationship (s_t_n ASC) TABLESPACE\n> ds_appex_ts_10\n> ;\n> CREATE INDEX IDX_relationship_t_s_n ON relationship (t_s_n ASC) TABLESPACE\n> ds_appex_ts_10\n> ;\n> CREATE INDEX IDX_relationship_type ON relationship (rel_type ASC) TABLESPACE\n> ds_appex_ts_10\n> ;\n> CREATE INDEX IDX_relationship_prime_key ON relationship (prime_key ASC)\n> TABLESPACE ds_appex_ts_10\n> ;\n> CREATE INDEX IDX_relationship_u_prime_key ON relationship (up_prime_key ASC)\n> TABLESPACE ds_appex_ts_10\n> ;\n> CREATE INDEX IDX_relationship_sec_key ON relationship (sec_key ASC)\n> TABLESPACE ds_appex_ts_10\n> ;\n> CREATE INDEX IDX_order_first ON node(sort_key DESC, node_id DESC) TABLESPACE\n> ds_appex_ts_10\n> ;\n> CREATE INDEX IDX_order_sort_down_1 ON 
relationship(sort_key DESC, prime_key\n> ASC NULLS FIRST, sec_key ASC NULLS FIRST) TABLESPACE ds_appex_ts_10\n> ;\n> CREATE INDEX IDX_order_sort_down_2 ON relationship(sort_key DESC, prime_key\n> ASC NULLS FIRST, sec_key DESC NULLS FIRST) TABLESPACE ds_appex_ts_10\n> ;\n> CREATE INDEX IDX_order_sort_up ON relationship(up_sort_key DESC,\n> up_prime_key ASC NULLS FIRST, up_sec_key ASC NULLS FIRST) TABLESPACE\n> ds_appex_ts_10\n> ;\n>\n> Query\n> -----\n> CREATE OR REPLACE FUNCTION sp_get_rel_sort_dup_sec_desc(in_rel_type1\n> integer, in_rel_type2 integer, in_node_type integer, in_own_guid bigint,\n> in_prev_prime_key bytea, in_prev_prime_key_len integer, in_prev_sec_key\n> bytea, in_prev_sec_key_len integer, in_prev_sort_key bigint, in_ctrl\n> integer) RETURNS select_rel_holder AS\n> '\n> declare\n> h select_rel_holder%rowtype;\n>\n> begin\n> SELECT INTO h r.rel_id, r.t_s_n, r.rel_type, r.sort_key,\n> r.state,r.permissions, r.control,\n> r.prime_key, r.prime_key_len, r.sec_key,\n> r.sec_key_len,\n> r.up_prime_key, r.up_prime_key_len, r.up_sec_key,\n> r.up_sec_key_len\n> FROM relationship r\n> WHERE r.s_t_n = in_own_guid AND (r.rel_type = in_rel_type1 OR\n> r.rel_type = in_rel_type2)\n> AND\n> (\n> (\n> (\n> r.prime_key > in_prev_prime_key\n> OR\n> ( r.prime_key = in_prev_prime_key AND r.sec_key <\n> in_prev_sec_key)\n> )\n> AND\n> r.sort_key = in_prev_sort_key\n> )\n>\n> OR\n> r.sort_key < in_prev_sort_key\n> )\n> AND t_s_n_type = in_node_type\n> AND r.control >= in_ctrl\n> ORDER BY sort_key DESC, prime_key ASC NULLS FIRST, sec_key DESC\n> NULLS FIRST LIMIT 1;\n> RETURN h;\n> end\n> '\n> language 'plpgsql' STABLE;\n>\n>\n> EXPLAIN ANALYZE output\n> -------------------------------\n> Limit (cost=48.90..48.90 rows=1 width=89) (actual\n> time=21.480..21.480 rows=1 loops=1)\n> Output: rel_id, t_s_n, rel_type, sort_key, state, permissions,\n> control, prime_key, prime_key_len, sec_key, sec_key_len, up_prime_key,\n> up_prime_key_l\n> en, up_sec_key, up_sec_key_len\n>\n> -> Sort (cost=48.90..48.90 rows=1 width=89) (actual\n> time=21.479..21.479 rows=1 loops=1)\n> Output: rel_id, t_s_n, rel_type, sort_key, state,\n> permissions, control, prime_key, prime_key_len, sec_key, sec_key_len,\n> up_prime_key, up_prime\n> _key_len, up_sec_key, up_sec_key_len\n> Sort Key: r.sort_key, r.prime_key, r.sec_key\n> Sort Method: top-N heapsort Memory: 25kB\n>\n> -> Bitmap Heap Scan on public.relationship r\n> (cost=3.39..48.89 rows=1 width=89) (actual time=1.034..13.509 rows=6986\n> loops=1)\n> Output: rel_id, t_s_n, rel_type, sort_key, state,\n> permissions, control, prime_key, prime_key_len, sec_key, sec_key_len,\n> up_prime_key, up\n> _prime_key_len, up_sec_key, up_sec_key_len\n> Recheck Cond: (r.s_t_n = $4)\n> Filter: ((r.control >= $10) AND (r.t_s_n_type = $3)\n> AND ((r.rel_type = $1) OR (r.rel_type = $2)) AND ((((r.prime_key > $5) OR\n> ((r.prime_\n> key = $5) AND (r.sec_key < $7))) AND (r.sort_key = $9)) OR (r.sort_key <\n> $9)))\n>\n> -> Bitmap Index Scan on idx_relationship_s_t_n\n> (cost=0.00..3.39 rows=18 width=0) (actual time=0.951..0.951 rows=6989\n> loops=1)\n> Index Cond: (r.s_t_n = $4)\n\nAbsolutely. You need to learn and master row-wise comparison. It was\nadded for exactly this purpose :-).\n\nSELECT * FROM foo WHERE (a,b,c) > (a1,b1,c1) ORDER BY a,b,c LIMIT k;\n\nwill be fully optimized if you have an index on a,b,c (a1,b1,c1 are\nthe last ones you read off). 
Be advised that if there is not a lot of\ncardinality on 'a', you may need to disable certain index plans to get\na good plan in some cases.\n\nmerlin\n",
"msg_date": "Tue, 26 Jun 2012 08:36:54 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can I do better than this heapscan and sort?"
},
{
"msg_contents": "On Tue, Jun 26, 2012 at 8:36 AM, Merlin Moncure <[email protected]> wrote:\n> On Thu, Jun 21, 2012 at 3:07 PM, Andy Halsall <[email protected]> wrote:\n>> I have two tables node and relationship. Each relationship record connects\n>> two nodes and has an application keys (unfortunately named) that can be used\n>> by the application to look-up a relationship and get from one node to the\n>> other.\n>>\n>> My query uses a node id and a description of a relationship from the node,\n>> and must find the \"next\" relationship that the node has. It does this by\n>> finding all the relationships that could be \"next\", ordering them and then\n>> getting the first.\n>>\n>> Details are below but I end up with 6896 candidates for \"next\".\n>>\n>> If I'm reading the output correctly it takes 13.509 ms to apply the filter\n>> and another 7 ms or so to do the sort of the remaining 6896 nodes.\n>>\n>> Have tried many index combinations to try and improve the results. I suspect\n>> that with so many nodes to sort, postgresql will opt for heap scan rather\n>> than index. But why does it not use the IDX_order_sort_down_2 index for the\n>> sort?\n>>\n>> Thanks,\n>> Andy\n>>\n>>\n>> Details..........\n>>\n>> Version\n>> -------\n>> PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.5.2,\n>> 64-bit\n>>\n>> Tables\n>> ------\n>> CREATE TABLE node (\n>> node_id bigint NOT NULL,\n>> node_type int4 NOT NULL,\n>> c_state int4 NOT NULL,\n>> d_state int4 NOT NULL,\n>> sort_key bigint NOT NULL,\n>> permissions bytea NOT NULL,\n>> audit bytea NOT NULL,\n>> pkg_id bytea NULL,\n>> created timestamp NOT NULL\n>> );\n>>\n>> CREATE TABLE relationship (\n>> rel_id bigint NOT NULL,\n>> rel_type integer NOT NULL,\n>> s_t_n bigint NOT NULL,\n>> t_s_n bigint NOT NULL,\n>> state integer NOT NULL,\n>> control integer NOT NULL,\n>> sort_key bigint NOT NULL,\n>> prime_key bytea NULL,\n>> prime_key_len integer NOT NULL,\n>> sec_key bytea NULL,\n>> sec_key_len integer NOT NULL,\n>> up_sort_key bigint NOT NULL,\n>> up_prime_key bytea NULL,\n>> up_prime_key_len integer NOT NULL,\n>> up_sec_key bytea NULL,\n>> up_sec_key_len integer NOT NULL,\n>> permissions bytea NOT NULL,\n>> t_s_n_type integer NOT NULL,\n>> created timestamp NOT NULL\n>> );\n>>\n>> Constraints\n>> -----------\n>> -- Primary keys\n>> ALTER TABLE node ADD CONSTRAINT PK_node PRIMARY KEY (node_id);\n>>\n>> ALTER TABLE relationship ADD CONSTRAINT PK_relationship PRIMARY KEY\n>> (rel_id);\n>>\n>> -- Foreign keys\n>> ALTER TABLE relationship ADD CONSTRAINT FK_node_s FOREIGN KEY (s_t_n)\n>> REFERENCES node (node_id);\n>>\n>> ALTER TABLE relationship ADD CONSTRAINT FK_node_n FOREIGN KEY (t_s_n)\n>> REFERENCES node (node_id);\n>>\n>>\n>> Indexes\n>> -------\n>> CREATE INDEX IDX_node_type ON node (node_type ASC) TABLESPACE ds_appex_ts_10\n>> ;\n>> CREATE INDEX IDX_node_sort_key ON node (sort_key ASC) TABLESPACE\n>> ds_appex_ts_10\n>> ;\n>> CREATE INDEX IDX_relationship_s_t_n ON relationship (s_t_n ASC) TABLESPACE\n>> ds_appex_ts_10\n>> ;\n>> CREATE INDEX IDX_relationship_t_s_n ON relationship (t_s_n ASC) TABLESPACE\n>> ds_appex_ts_10\n>> ;\n>> CREATE INDEX IDX_relationship_type ON relationship (rel_type ASC) TABLESPACE\n>> ds_appex_ts_10\n>> ;\n>> CREATE INDEX IDX_relationship_prime_key ON relationship (prime_key ASC)\n>> TABLESPACE ds_appex_ts_10\n>> ;\n>> CREATE INDEX IDX_relationship_u_prime_key ON relationship (up_prime_key ASC)\n>> TABLESPACE ds_appex_ts_10\n>> ;\n>> CREATE INDEX IDX_relationship_sec_key ON relationship (sec_key 
ASC)\n>> TABLESPACE ds_appex_ts_10\n>> ;\n>> CREATE INDEX IDX_order_first ON node(sort_key DESC, node_id DESC) TABLESPACE\n>> ds_appex_ts_10\n>> ;\n>> CREATE INDEX IDX_order_sort_down_1 ON relationship(sort_key DESC, prime_key\n>> ASC NULLS FIRST, sec_key ASC NULLS FIRST) TABLESPACE ds_appex_ts_10\n>> ;\n>> CREATE INDEX IDX_order_sort_down_2 ON relationship(sort_key DESC, prime_key\n>> ASC NULLS FIRST, sec_key DESC NULLS FIRST) TABLESPACE ds_appex_ts_10\n>> ;\n>> CREATE INDEX IDX_order_sort_up ON relationship(up_sort_key DESC,\n>> up_prime_key ASC NULLS FIRST, up_sec_key ASC NULLS FIRST) TABLESPACE\n>> ds_appex_ts_10\n>> ;\n>>\n>> Query\n>> -----\n>> CREATE OR REPLACE FUNCTION sp_get_rel_sort_dup_sec_desc(in_rel_type1\n>> integer, in_rel_type2 integer, in_node_type integer, in_own_guid bigint,\n>> in_prev_prime_key bytea, in_prev_prime_key_len integer, in_prev_sec_key\n>> bytea, in_prev_sec_key_len integer, in_prev_sort_key bigint, in_ctrl\n>> integer) RETURNS select_rel_holder AS\n>> '\n>> declare\n>> h select_rel_holder%rowtype;\n>>\n>> begin\n>> SELECT INTO h r.rel_id, r.t_s_n, r.rel_type, r.sort_key,\n>> r.state,r.permissions, r.control,\n>> r.prime_key, r.prime_key_len, r.sec_key,\n>> r.sec_key_len,\n>> r.up_prime_key, r.up_prime_key_len, r.up_sec_key,\n>> r.up_sec_key_len\n>> FROM relationship r\n>> WHERE r.s_t_n = in_own_guid AND (r.rel_type = in_rel_type1 OR\n>> r.rel_type = in_rel_type2)\n>> AND\n>> (\n>> (\n>> (\n>> r.prime_key > in_prev_prime_key\n>> OR\n>> ( r.prime_key = in_prev_prime_key AND r.sec_key <\n>> in_prev_sec_key)\n>> )\n>> AND\n>> r.sort_key = in_prev_sort_key\n>> )\n>>\n>> OR\n>> r.sort_key < in_prev_sort_key\n>> )\n>> AND t_s_n_type = in_node_type\n>> AND r.control >= in_ctrl\n>> ORDER BY sort_key DESC, prime_key ASC NULLS FIRST, sec_key DESC\n>> NULLS FIRST LIMIT 1;\n>> RETURN h;\n>> end\n>> '\n>> language 'plpgsql' STABLE;\n>>\n>>\n>> EXPLAIN ANALYZE output\n>> -------------------------------\n>> Limit (cost=48.90..48.90 rows=1 width=89) (actual\n>> time=21.480..21.480 rows=1 loops=1)\n>> Output: rel_id, t_s_n, rel_type, sort_key, state, permissions,\n>> control, prime_key, prime_key_len, sec_key, sec_key_len, up_prime_key,\n>> up_prime_key_l\n>> en, up_sec_key, up_sec_key_len\n>>\n>> -> Sort (cost=48.90..48.90 rows=1 width=89) (actual\n>> time=21.479..21.479 rows=1 loops=1)\n>> Output: rel_id, t_s_n, rel_type, sort_key, state,\n>> permissions, control, prime_key, prime_key_len, sec_key, sec_key_len,\n>> up_prime_key, up_prime\n>> _key_len, up_sec_key, up_sec_key_len\n>> Sort Key: r.sort_key, r.prime_key, r.sec_key\n>> Sort Method: top-N heapsort Memory: 25kB\n>>\n>> -> Bitmap Heap Scan on public.relationship r\n>> (cost=3.39..48.89 rows=1 width=89) (actual time=1.034..13.509 rows=6986\n>> loops=1)\n>> Output: rel_id, t_s_n, rel_type, sort_key, state,\n>> permissions, control, prime_key, prime_key_len, sec_key, sec_key_len,\n>> up_prime_key, up\n>> _prime_key_len, up_sec_key, up_sec_key_len\n>> Recheck Cond: (r.s_t_n = $4)\n>> Filter: ((r.control >= $10) AND (r.t_s_n_type = $3)\n>> AND ((r.rel_type = $1) OR (r.rel_type = $2)) AND ((((r.prime_key > $5) OR\n>> ((r.prime_\n>> key = $5) AND (r.sec_key < $7))) AND (r.sort_key = $9)) OR (r.sort_key <\n>> $9)))\n>>\n>> -> Bitmap Index Scan on idx_relationship_s_t_n\n>> (cost=0.00..3.39 rows=18 width=0) (actual time=0.951..0.951 rows=6989\n>> loops=1)\n>> Index Cond: (r.s_t_n = $4)\n>\n> Absolutely. You need to learn and master row-wise comparison. 
It was\n> added for exactly this purpose :-).\n>\n> SELECT * FROM foo WHERE (a,b,c) > (a1,b1,c1) ORDER BY a,b,c LIMIT k;\n>\n> will be fully optimized if you have an index on a,b,c (a1,b1,c1 are\n> the last ones you read off). Be advised that if there is not a lot of\n> cardinality on 'a', you may need to disable certain index plans to get\n> a good plan in some cases.\n\nhm, one more point: I notice you are mixing ASC/DESC in the index\ndefinition. Try to avoid doing that: it will make index based paging\nof the table more difficult. If you have to, try transforming the\nvalues so that you can index all the fields ASC or DESC. This will\nalso fit easier into row-wise comparisons strategy although it will\nslow down insertion a bit.\n\nmerlin\n",
"msg_date": "Tue, 26 Jun 2012 08:42:34 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can I do better than this heapscan and sort?"
},
{
"msg_contents": "On Wed, Jun 27, 2012 at 10:40 AM, Andy Halsall <[email protected]> wrote:\n>\n>\n>> Date: Tue, 26 Jun 2012 08:42:34 -0500\n>> Subject: Re: [PERFORM] Can I do better than this heapscan and sort?\n>> From: [email protected]\n>> To: [email protected]\n>> CC: [email protected]\n>\n>>\n>> On Tue, Jun 26, 2012 at 8:36 AM, Merlin Moncure <[email protected]>\n>> wrote:\n>> > On Thu, Jun 21, 2012 at 3:07 PM, Andy Halsall <[email protected]>\n>> > wrote:\n>> >> I have two tables node and relationship. Each relationship record\n>> >> connects\n>> >> two nodes and has an application keys (unfortunately named) that can be\n>> >> used\n>> >> by the application to look-up a relationship and get from one node to\n>> >> the\n>> >> other.\n>> >>\n>> >> My query uses a node id and a description of a relationship from the\n>> >> node,\n>> >> and must find the \"next\" relationship that the node has. It does this\n>> >> by\n>> >> finding all the relationships that could be \"next\", ordering them and\n>> >> then\n>> >> getting the first.\n>> >>\n>> >> Details are below but I end up with 6896 candidates for \"next\".\n>> >>\n>> >> If I'm reading the output correctly it takes 13.509 ms to apply the\n>> >> filter\n>> >> and another 7 ms or so to do the sort of the remaining 6896 nodes.\n>> >>\n>> >> Have tried many index combinations to try and improve the results. I\n>> >> suspect\n>> >> that with so many nodes to sort, postgresql will opt for heap scan\n>> >> rather\n>> >> than index. But why does it not use the IDX_order_sort_down_2 index for\n>> >> the\n>> >> sort?\n>> >>\n>> >> Thanks,\n>> >> Andy\n>> >>\n>> >>\n>> >> Details..........\n>> >>\n>> >> Version\n>> >> -------\n>> >> PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n>> >> 4.5.2,\n>> >> 64-bit\n>> >>\n>> >> Tables\n>> >> ------\n>> >> CREATE TABLE node (\n>> >> node_id bigint NOT NULL,\n>> >> node_type int4 NOT NULL,\n>> >> c_state int4 NOT NULL,\n>> >> d_state int4 NOT NULL,\n>> >> sort_key bigint NOT NULL,\n>> >> permissions bytea NOT NULL,\n>> >> audit bytea NOT NULL,\n>> >> pkg_id bytea NULL,\n>> >> created timestamp NOT NULL\n>> >> );\n>> >>\n>> >> CREATE TABLE relationship (\n>> >> rel_id bigint NOT NULL,\n>> >> rel_type integer NOT NULL,\n>> >> s_t_n bigint NOT NULL,\n>> >> t_s_n bigint NOT NULL,\n>> >> state integer NOT NULL,\n>> >> control integer NOT NULL,\n>> >> sort_key bigint NOT NULL,\n>> >> prime_key bytea NULL,\n>> >> prime_key_len integer NOT NULL,\n>> >> sec_key bytea NULL,\n>> >> sec_key_len integer NOT NULL,\n>> >> up_sort_key bigint NOT NULL,\n>> >> up_prime_key bytea NULL,\n>> >> up_prime_key_len integer NOT NULL,\n>> >> up_sec_key bytea NULL,\n>> >> up_sec_key_len integer NOT NULL,\n>> >> permissions bytea NOT NULL,\n>> >> t_s_n_type integer NOT NULL,\n>> >> created timestamp NOT NULL\n>> >> );\n>> >>\n>> >> Constraints\n>> >> -----------\n>> >> -- Primary keys\n>> >> ALTER TABLE node ADD CONSTRAINT PK_node PRIMARY KEY (node_id);\n>> >>\n>> >> ALTER TABLE relationship ADD CONSTRAINT PK_relationship PRIMARY KEY\n>> >> (rel_id);\n>> >>\n>> >> -- Foreign keys\n>> >> ALTER TABLE relationship ADD CONSTRAINT FK_node_s FOREIGN KEY (s_t_n)\n>> >> REFERENCES node (node_id);\n>> >>\n>> >> ALTER TABLE relationship ADD CONSTRAINT FK_node_n FOREIGN KEY (t_s_n)\n>> >> REFERENCES node (node_id);\n>> >>\n>> >>\n>> >> Indexes\n>> >> -------\n>> >> CREATE INDEX IDX_node_type ON node (node_type ASC) TABLESPACE\n>> >> ds_appex_ts_10\n>> >> ;\n>> >> CREATE INDEX IDX_node_sort_key ON node (sort_key ASC) 
TABLESPACE\n>> >> ds_appex_ts_10\n>> >> ;\n>> >> CREATE INDEX IDX_relationship_s_t_n ON relationship (s_t_n ASC)\n>> >> TABLESPACE\n>> >> ds_appex_ts_10\n>> >> ;\n>> >> CREATE INDEX IDX_relationship_t_s_n ON relationship (t_s_n ASC)\n>> >> TABLESPACE\n>> >> ds_appex_ts_10\n>> >> ;\n>> >> CREATE INDEX IDX_relationship_type ON relationship (rel_type ASC)\n>> >> TABLESPACE\n>> >> ds_appex_ts_10\n>> >> ;\n>> >> CREATE INDEX IDX_relationship_prime_key ON relationship (prime_key ASC)\n>> >> TABLESPACE ds_appex_ts_10\n>> >> ;\n>> >> CREATE INDEX IDX_relationship_u_prime_key ON relationship (up_prime_key\n>> >> ASC)\n>> >> TABLESPACE ds_appex_ts_10\n>> >> ;\n>> >> CREATE INDEX IDX_relationship_sec_key ON relationship (sec_key ASC)\n>> >> TABLESPACE ds_appex_ts_10\n>> >> ;\n>> >> CREATE INDEX IDX_order_first ON node(sort_key DESC, node_id DESC)\n>> >> TABLESPACE\n>> >> ds_appex_ts_10\n>> >> ;\n>> >> CREATE INDEX IDX_order_sort_down_1 ON relationship(sort_key DESC,\n>> >> prime_key\n>> >> ASC NULLS FIRST, sec_key ASC NULLS FIRST) TABLESPACE ds_appex_ts_10\n>> >> ;\n>> >> CREATE INDEX IDX_order_sort_down_2 ON relationship(sort_key DESC,\n>> >> prime_key\n>> >> ASC NULLS FIRST, sec_key DESC NULLS FIRST) TABLESPACE ds_appex_ts_10\n>> >> ;\n>> >> CREATE INDEX IDX_order_sort_up ON relationship(up_sort_key DESC,\n>> >> up_prime_key ASC NULLS FIRST, up_sec_key ASC NULLS FIRST) TABLESPACE\n>> >> ds_appex_ts_10\n>> >> ;\n>> >>\n>> >> Query\n>> >> -----\n>> >> CREATE OR REPLACE FUNCTION sp_get_rel_sort_dup_sec_desc(in_rel_type1\n>> >> integer, in_rel_type2 integer, in_node_type integer, in_own_guid\n>> >> bigint,\n>> >> in_prev_prime_key bytea, in_prev_prime_key_len integer, in_prev_sec_key\n>> >> bytea, in_prev_sec_key_len integer, in_prev_sort_key bigint, in_ctrl\n>> >> integer) RETURNS select_rel_holder AS\n>> >> '\n>> >> declare\n>> >> h select_rel_holder%rowtype;\n>> >>\n>> >> begin\n>> >> SELECT INTO h r.rel_id, r.t_s_n, r.rel_type, r.sort_key,\n>> >> r.state,r.permissions, r.control,\n>> >> r.prime_key, r.prime_key_len, r.sec_key,\n>> >> r.sec_key_len,\n>> >> r.up_prime_key, r.up_prime_key_len, r.up_sec_key,\n>> >> r.up_sec_key_len\n>> >> FROM relationship r\n>> >> WHERE r.s_t_n = in_own_guid AND (r.rel_type = in_rel_type1 OR\n>> >> r.rel_type = in_rel_type2)\n>> >> AND\n>> >> (\n>> >> (\n>> >> (\n>> >> r.prime_key > in_prev_prime_key\n>> >> OR\n>> >> ( r.prime_key = in_prev_prime_key AND r.sec_key <\n>> >> in_prev_sec_key)\n>> >> )\n>> >> AND\n>> >> r.sort_key = in_prev_sort_key\n>> >> )\n>> >>\n>> >> OR\n>> >> r.sort_key < in_prev_sort_key\n>> >> )\n>> >> AND t_s_n_type = in_node_type\n>> >> AND r.control >= in_ctrl\n>> >> ORDER BY sort_key DESC, prime_key ASC NULLS FIRST, sec_key DESC\n>> >> NULLS FIRST LIMIT 1;\n>> >> RETURN h;\n>> >> end\n>> >> '\n>> >> language 'plpgsql' STABLE;\n>> >>\n>> >>\n>> >> EXPLAIN ANALYZE output\n>> >> -------------------------------\n>> >> Limit (cost=48.90..48.90 rows=1 width=89) (actual\n>> >> time=21.480..21.480 rows=1 loops=1)\n>> >> Output: rel_id, t_s_n, rel_type, sort_key, state,\n>> >> permissions,\n>> >> control, prime_key, prime_key_len, sec_key, sec_key_len, up_prime_key,\n>> >> up_prime_key_l\n>> >> en, up_sec_key, up_sec_key_len\n>> >>\n>> >> -> Sort (cost=48.90..48.90 rows=1 width=89) (actual\n>> >> time=21.479..21.479 rows=1 loops=1)\n>> >> Output: rel_id, t_s_n, rel_type, sort_key, state,\n>> >> permissions, control, prime_key, prime_key_len, sec_key, sec_key_len,\n>> >> up_prime_key, up_prime\n>> >> _key_len, up_sec_key, up_sec_key_len\n>> >> Sort Key: 
r.sort_key, r.prime_key, r.sec_key\n>> >> Sort Method: top-N heapsort Memory: 25kB\n>> >>\n>> >> -> Bitmap Heap Scan on public.relationship r\n>> >> (cost=3.39..48.89 rows=1 width=89) (actual time=1.034..13.509 rows=6986\n>> >> loops=1)\n>> >> Output: rel_id, t_s_n, rel_type, sort_key, state,\n>> >> permissions, control, prime_key, prime_key_len, sec_key, sec_key_len,\n>> >> up_prime_key, up\n>> >> _prime_key_len, up_sec_key, up_sec_key_len\n>> >> Recheck Cond: (r.s_t_n = $4)\n>> >> Filter: ((r.control >= $10) AND (r.t_s_n_type =\n>> >> $3)\n>> >> AND ((r.rel_type = $1) OR (r.rel_type = $2)) AND ((((r.prime_key > $5)\n>> >> OR\n>> >> ((r.prime_\n>> >> key = $5) AND (r.sec_key < $7))) AND (r.sort_key = $9)) OR (r.sort_key\n>> >> <\n>> >> $9)))\n>> >>\n>> >> -> Bitmap Index Scan on idx_relationship_s_t_n\n>> >> (cost=0.00..3.39 rows=18 width=0) (actual time=0.951..0.951 rows=6989\n>> >> loops=1)\n>> >> Index Cond: (r.s_t_n = $4)\n>> >\n>> > Absolutely. You need to learn and master row-wise comparison. It was\n>> > added for exactly this purpose :-).\n>> >\n>> > SELECT * FROM foo WHERE (a,b,c) > (a1,b1,c1) ORDER BY a,b,c LIMIT k;\n>> >\n>> > will be fully optimized if you have an index on a,b,c (a1,b1,c1 are\n>> > the last ones you read off). Be advised that if there is not a lot of\n>> > cardinality on 'a', you may need to disable certain index plans to get\n>> > a good plan in some cases.\n>>\n>> hm, one more point: I notice you are mixing ASC/DESC in the index\n>> definition. Try to avoid doing that: it will make index based paging\n>> of the table more difficult. If you have to, try transforming the\n>> values so that you can index all the fields ASC or DESC. This will\n>> also fit easier into row-wise comparisons strategy although it will\n>> slow down insertion a bit.\n>>\n>> merlin\n>\n> Thanks Merlin. I wasn't aware of the row-wise comparison stuff. It's tricky\n> in the case above though as I want to do (a > b) OR (a=b AND b < c).\n>\n> As it happens, the application needs to iterate the set of \"next\"\n> relationships, which as I had it meant establishing a new set on each\n> iteration (where new set = old set minus its first entry). This is what the\n> filter was doing in the above query. This is too expensive (as ~7000\n> iterations) and so have modified solution to cache the ordered set when\n> first established.\n>\n> Even without the filter I still need to order the return as above. Still\n> don't understand why it doesn't use the index for this.\n\npostgres isn't smart enough to convert the complex boolean expressions\nfor multiple field set ordering into a multi-column index lookup\nunless the row-wise comparison syntax is used.\n\nwhere (a,b) > (a1,b1)\ncan be written as boolean:\nwhere (a>a1) OR (a=a1 AND b>b1)\nalternate form is:\nwhere (a>=a1) AND (a>a1 OR b>b1)\n\nwhere (a,b,c) > (a1,b1,c1)\ncan be written as boolean:\nwhere (a>a1) OR (a=a1 AND b>b1) OR (a = b1 AND b1 AND c>c1)\nalternate form is:\nwhere (a>=a1) AND (a>a1 OR b>=b1) AND (a>a1 OR b>b1 OR c>=c1)\n\n\nin either case using boolean construction postgres will only use an\nindex on 'a' if it exists, but will fully optimize the row-wise case\nif it matches the index. therefore, making your query faster using\nrow-wise is going to be an exercise of making an index matching your\nset browsing that contains either all ASC or all DESC -- you can't mix\nand match. 
in order to do that some transformation of your values is\nin order -- either directly on the table itself or with functional\nindexes doing the transformation during index build. For integers we\ncan do that by multiplying by -1. Other datatypes might have more\ncomplicated constructions but it's usually doable.\n\nAll this assumes, btw, that you can logically order your table with\na consistent ordering and you are trying to index lookups between some\ntwo points within that ordering (perhaps sliding that window as you\ncrawl the table)\n\nmerlin\n",
"msg_date": "Wed, 27 Jun 2012 11:49:02 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can I do better than this heapscan and sort?"
}
] |
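The row-wise comparison technique described in the thread above, sketched as standalone SQL. The table and column names are hypothetical stand-ins for the thread's sort_key/prime_key/sec_key columns, and the negated-column index at the end is only one way of expressing the multiply-by-minus-one suggestion for integers:

-- Hypothetical table standing in for the relationship table discussed above.
CREATE TABLE rel_page_demo (
    sort_key  bigint,
    prime_key bytea,
    sec_key   bytea
);

-- Index every paging column in the same direction so a single index scan can
-- satisfy both the row-wise comparison and the ORDER BY.
CREATE INDEX rel_page_demo_idx ON rel_page_demo (sort_key, prime_key, sec_key);

-- Keyset paging: fetch the row that follows the last one read. The three
-- literals stand for the last sort_key, prime_key and sec_key values returned.
SELECT *
FROM rel_page_demo
WHERE (sort_key, prime_key, sec_key) > (42, '\x01'::bytea, '\x02'::bytea)
ORDER BY sort_key, prime_key, sec_key
LIMIT 1;

-- If sort_key has to come back descending while the other keys stay ascending,
-- the thread's suggestion for integers is to page on a negated copy instead,
-- for example with an expression index such as:
--   CREATE INDEX rel_page_demo_desc_idx
--       ON rel_page_demo ((-sort_key), prime_key, sec_key);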
[
{
"msg_contents": "Hi all,\n\nMay be I completely wrong but I always assumed that the access speed to the\narray element in PostgreSQL should be close to constant time.\nBut in tests I found that access speed degrade as O(N) of array size.\n\nTest case (performed on large not busy server with 1GB work_mem to ensure I\nworking with memory only):\n\nWITH\nt AS (SELECT ARRAY(SELECT * FROM generate_series(1,N)) AS _array)\nSELECT count((SELECT _array[i] FROM t)) FROM generate_series(1,10000) as\ng(i);\n\nResults for N between 1 and 10.000.000 (used locally connected psql with\n\\timing):\n\nN: Time:\n1 5.8ms\n10 5.8ms\n100 5.8ms\n1000 6.7ms\n--until there all reasonable\n5k 21ms\n10k 34ms\n50k 177ms\n100k 321ms\n500k 4100ms\n1M 8100ms\n2M 22000ms\n5M 61000ms\n10M 220000ms = 22ms to sinlge array element access.\n\n\nIs that behaviour is correct?\n\nPS: what I actually lookin for - constant fast access by position\ntuplestore for use with recursive queries and/or pl/pgsql, but without\nusing C programming.\n\n-- \nMaxim Boguk\nSenior Postgresql DBA.\n\nPhone RU: +7 910 405 4718\nPhone AU: +61 45 218 5678\n\nSkype: maxim.boguk\nJabber: [email protected]\nМойКруг: http://mboguk.moikrug.ru/\n\n\"People problems are solved with people.\nIf people cannot solve the problem, try technology.\nPeople will then wish they'd listened at the first stage.\"\n\nHi all,May be I completely wrong but I always assumed that the access speed to the array element in PostgreSQL should be close to constant time.But in tests I found that access speed degrade as O(N) of array size.\nTest case (performed on large not busy server with 1GB work_mem to ensure I working with memory only):WITH\nt AS (SELECT ARRAY(SELECT * FROM generate_series(1,N)) AS _array)\nSELECT count((SELECT _array[i] FROM t)) FROM generate_series(1,10000) as g(i);\nResults for N between 1 and 10.000.000 (used locally connected psql with \\timing):N: Time:1 5.8ms\n10 5.8ms100 5.8ms\n1000 6.7ms--until there all reasonable\n5k 21ms10k 34ms\n50k 177ms100k 321ms\n500k 4100ms\n1M 8100ms2M 22000ms5M 61000ms\n10M 220000ms = 22ms to sinlge array element access.Is that behaviour is correct?PS: what I actually lookin for - constant fast access by position tuplestore for use with recursive queries and/or pl/pgsql, but without using C programming.\n-- Maxim BogukSenior Postgresql DBA.Phone RU: +7 910 405 4718Phone AU: +61 45 218 5678Skype: maxim.bogukJabber: [email protected]\n\nМойКруг: http://mboguk.moikrug.ru/\"People problems are solved with people. If people cannot solve the problem, try technology. People will then wish they'd listened at the first stage.\"",
"msg_date": "Fri, 22 Jun 2012 17:02:32 +1000",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of a large array access by position (tested version\n 9.1.3)"
},
{
"msg_contents": "On 22/06/12 09:02, Maxim Boguk wrote:\n> Hi all,\n>\n> May be I completely wrong but I always assumed that the access speed \n> to the array element in PostgreSQL should be close to constant time.\n> But in tests I found that access speed degrade as O(N) of array size.\n>\n> Test case (performed on large not busy server with 1GB work_mem to \n> ensure I working with memory only):\n>\n> WITH\n> t AS (SELECT ARRAY(SELECT * FROM generate_series(1,N)) AS _array)\n> SELECT count((SELECT _array[i] FROM t)) FROM generate_series(1,10000) \n> as g(i);\n>\n> Results for N between 1 and 10.000.000 (used locally connected psql \n> with \\timing):\n>\n> N: Time:\n> 1 5.8ms\n> 10 5.8ms\n> 100 5.8ms\n> 1000 6.7ms\n> --until there all reasonable\n> 5k 21ms\n> 10k 34ms\n> 50k 177ms\n> 100k 321ms\n> 500k 4100ms\n> 1M 8100ms\n> 2M 22000ms\n> 5M 61000ms\n> 10M 220000ms = 22ms to sinlge array element access.\n>\n>\n> Is that behaviour is correct?\n>\n> PS: what I actually lookin for - constant fast access by position \n> tuplestore for use with recursive queries and/or pl/pgsql, but without \n> using C programming.\n\nDefault column storage is to \"compress it, and store in TOAST\" with \nlarge values.\nThis it what is causing the shift. Try to change the column storage of \nthe column\nto EXTERNAL instead and rerun the test.\n\nALTER TABLE <tablename> ALTER COLUMN <column name> SET STORAGE EXTERNAL\n\nDefault is EXTENDED which runs compression on it, which again makes it \nhard to\nposition into without reading and decompressing everything.\n\nhttp://www.postgresql.org/docs/9.1/static/sql-altertable.html\n\nLet us know what you get.?\n\nJesper\n\n\n\n\n\n\n\n On 22/06/12 09:02, Maxim Boguk wrote:\n Hi all,\n\n May be I completely wrong but I always assumed that the access\n speed to the array element in PostgreSQL should be close to\n constant time.\n But in tests I found that access speed degrade as O(N) of array\n size.\n\n Test case (performed on large not busy server with 1GB work_mem to\n ensure I working with memory only):\n\n WITH\n t AS (SELECT ARRAY(SELECT * FROM generate_series(1,N)) AS _array)\n SELECT count((SELECT _array[i] FROM t)) FROM\n generate_series(1,10000) as g(i);\n\n Results for N between 1 and 10.000.000 (used locally connected\n psql with \\timing):\n\nN: Time:\n1 5.8ms\n10 5.8ms\n100 5.8ms\n1000 6.7ms\n--until there all\n reasonable\n5k 21ms\n10k 34ms\n50k 177ms\n100k 321ms\n500k 4100ms\n\n 1M 8100ms\n2M 22000ms\n5M 61000ms\n10M 220000ms\n = 22ms to sinlge array element access.\n\n\n Is that behaviour is correct?\n\n PS: what I actually lookin for - constant fast access by position\n tuplestore for use with recursive queries and/or pl/pgsql, but\n without using C programming.\n\n\n Default column storage is to \"compress it, and store in TOAST\" with\n large values. \n This it what is causing the shift. Try to change the column storage\n of the column\n to EXTERNAL instead and rerun the test. \n\n ALTER TABLE <tablename> ALTER COLUMN <column name> SET\n STORAGE EXTERNAL\n\n Default is EXTENDED which runs compression on it, which again makes\n it hard to \n position into without reading and decompressing everything. \n\nhttp://www.postgresql.org/docs/9.1/static/sql-altertable.html\n\n Let us know what you get.? \n\n Jesper",
"msg_date": "Tue, 26 Jun 2012 07:03:26 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of a large array access by position (tested\n\tversion 9.1.3)"
},
{
"msg_contents": "\r\n>> On 22/06/12 09:02, Maxim Boguk wrote: \r\n\r\n>> May be I completely wrong but I always assumed that the access speed to the array element in PostgreSQL should be close to constant time.\r\n>> But in tests I found that access speed degrade as O(N) of array size.\r\n\r\n>> Is that behaviour is correct?\r\n\r\n\r\n> From: [email protected] On Behalf Of Jesper Krogh\r\n\r\n> Default column storage is to \"compress it, and store in TOAST\" with large values. \r\n> This it what is causing the shift. Try to change the column storage of the column\r\n> to EXTERNAL instead and rerun the test. \r\n\r\n\r\nHello,\r\n\r\nI've repeated your test in a simplified form:\r\nyou are right :-(\r\n\r\ncreate table t1 ( _array int[]);\r\nalter table t1 alter _array set storage external;\r\ninsert into t1 SELECT ARRAY(SELECT * FROM generate_series(1,50000));\r\n\r\ncreate table t2 ( _array int[]);\r\nalter table t2 alter _array set storage external;\r\ninsert into t2 SELECT ARRAY(SELECT * FROM generate_series(1,5000000));\r\n\r\nexplain analyze SELECT _array[1] FROM t1;\r\nTotal runtime: 0.125 ms\r\n\r\nexplain analyze SELECT _array[1] FROM t2;\r\nTotal runtime: 8.649 ms\r\n\r\n\r\nbest regards,\r\n\r\nMarc Mamin\r\n\r\n\r\n",
"msg_date": "Tue, 26 Jun 2012 09:53:07 +0200",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of a large array access by position (tested version\n\t9.1.3)"
},
{
"msg_contents": "2012/6/26 Marc Mamin <[email protected]>:\n>\n>>> On 22/06/12 09:02, Maxim Boguk wrote:\n>\n>>> May be I completely wrong but I always assumed that the access speed to the array element in PostgreSQL should be close to constant time.\n>>> But in tests I found that access speed degrade as O(N) of array size.\n>\n>>> Is that behaviour is correct?\n\nyes - access to n position means in postgresql - skip n-1 elements\n\nRegards\n\nPavel\n\n>\n>\n>> From: [email protected] On Behalf Of Jesper Krogh\n>\n>> Default column storage is to \"compress it, and store in TOAST\" with large values.\n>> This it what is causing the shift. Try to change the column storage of the column\n>> to EXTERNAL instead and rerun the test.\n>\n>\n> Hello,\n>\n> I've repeated your test in a simplified form:\n> you are right :-(\n>\n> create table t1 ( _array int[]);\n> alter table t1 alter _array set storage external;\n> insert into t1 SELECT ARRAY(SELECT * FROM generate_series(1,50000));\n>\n> create table t2 ( _array int[]);\n> alter table t2 alter _array set storage external;\n> insert into t2 SELECT ARRAY(SELECT * FROM generate_series(1,5000000));\n>\n> explain analyze SELECT _array[1] FROM t1;\n> Total runtime: 0.125 ms\n>\n> explain analyze SELECT _array[1] FROM t2;\n> Total runtime: 8.649 ms\n>\n>\n> best regards,\n>\n> Marc Mamin\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 26 Jun 2012 10:04:22 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of a large array access by position (tested\n\tversion 9.1.3)"
},
{
"msg_contents": "\r\n\r\n> -----Original Message-----\r\n> From: Pavel Stehule [mailto:[email protected]]\r\n> \r\n> 2012/6/26 Marc Mamin <[email protected]>:\r\n> >\r\n> >>> On 22/06/12 09:02, Maxim Boguk wrote:\r\n> >\r\n> >>> May be I completely wrong but I always assumed that the access\r\n> speed to the array element in PostgreSQL should be close to constant\r\n> time.\r\n> >>> But in tests I found that access speed degrade as O(N) of array\r\n> size.\r\n> >\r\n> >>> Is that behaviour is correct?\r\n> \r\n> yes - access to n position means in postgresql - skip n-1 elements\r\n\r\n\r\nHmmm...\r\n\r\nhow many elements to be skipped here ?\r\n\r\nSELECT _array[1] FROM t2;\r\n\r\nI wonder if the time rather get spent in first retrieving the array itself before accessing its elements.\r\n\r\nregards,\r\n\r\nMarc Mamin\r\n\r\n> \r\n> Regards\r\n> \r\n> Pavel\r\n> \r\n> >\r\n> >\r\n> >> From: [email protected] On Behalf Of Jesper\r\n> Krogh\r\n> >\r\n> >> Default column storage is to \"compress it, and store in TOAST\" with\r\n> large values.\r\n> >> This it what is causing the shift. Try to change the column storage\r\n> of the column\r\n> >> to EXTERNAL instead and rerun the test.\r\n> >\r\n> >\r\n> > Hello,\r\n> >\r\n> > I've repeated your test in a simplified form:\r\n> > you are right :-(\r\n> >\r\n> > create table t1 ( _array int[]);\r\n> > alter table t1 alter _array set storage external;\r\n> > insert into t1 SELECT ARRAY(SELECT * FROM generate_series(1,50000));\r\n> >\r\n> > create table t2 ( _array int[]);\r\n> > alter table t2 alter _array set storage external;\r\n> > insert into t2 SELECT ARRAY(SELECT * FROM\r\n> generate_series(1,5000000));\r\n> >\r\n> > explain analyze SELECT _array[1] FROM t1;\r\n> > Total runtime: 0.125 ms\r\n> >\r\n> > explain analyze SELECT _array[1] FROM t2;\r\n> > Total runtime: 8.649 ms\r\n> >\r\n> >\r\n> > best regards,\r\n> >\r\n> > Marc Mamin\r\n> >\r\n> >\r\n> >\r\n> > --\r\n> > Sent via pgsql-performance mailing list (pgsql-\r\n> [email protected])\r\n> > To make changes to your subscription:\r\n> > http://www.postgresql.org/mailpref/pgsql-performance\r\n",
"msg_date": "Tue, 26 Jun 2012 10:19:45 +0200",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of a large array access by position (tested version\n\t9.1.3)"
},
{
"msg_contents": "2012/6/26 Marc Mamin <[email protected]>:\n>\n>\n>> -----Original Message-----\n>> From: Pavel Stehule [mailto:[email protected]]\n>>\n>> 2012/6/26 Marc Mamin <[email protected]>:\n>> >\n>> >>> On 22/06/12 09:02, Maxim Boguk wrote:\n>> >\n>> >>> May be I completely wrong but I always assumed that the access\n>> speed to the array element in PostgreSQL should be close to constant\n>> time.\n>> >>> But in tests I found that access speed degrade as O(N) of array\n>> size.\n>> >\n>> >>> Is that behaviour is correct?\n>>\n>> yes - access to n position means in postgresql - skip n-1 elements\n>\n>\n> Hmmm...\n>\n> how many elements to be skipped here ?\n\nthere are two independent stages:\n\na) detoast - loading and decompression (complete array is detoasted)\nb) access\n\nif you has very large arrays, then @a is significant\n\nRegards\n\nPavel\n\n\n>\n> SELECT _array[1] FROM t2;\n>\n> I wonder if the time rather get spent in first retrieving the array itself before accessing its elements.\n>\n> regards,\n>\n> Marc Mamin\n>\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>> >\n>> >\n>> >> From: [email protected] On Behalf Of Jesper\n>> Krogh\n>> >\n>> >> Default column storage is to \"compress it, and store in TOAST\" with\n>> large values.\n>> >> This it what is causing the shift. Try to change the column storage\n>> of the column\n>> >> to EXTERNAL instead and rerun the test.\n>> >\n>> >\n>> > Hello,\n>> >\n>> > I've repeated your test in a simplified form:\n>> > you are right :-(\n>> >\n>> > create table t1 ( _array int[]);\n>> > alter table t1 alter _array set storage external;\n>> > insert into t1 SELECT ARRAY(SELECT * FROM generate_series(1,50000));\n>> >\n>> > create table t2 ( _array int[]);\n>> > alter table t2 alter _array set storage external;\n>> > insert into t2 SELECT ARRAY(SELECT * FROM\n>> generate_series(1,5000000));\n>> >\n>> > explain analyze SELECT _array[1] FROM t1;\n>> > Total runtime: 0.125 ms\n>> >\n>> > explain analyze SELECT _array[1] FROM t2;\n>> > Total runtime: 8.649 ms\n>> >\n>> >\n>> > best regards,\n>> >\n>> > Marc Mamin\n>> >\n>> >\n>> >\n>> > --\n>> > Sent via pgsql-performance mailing list (pgsql-\n>> [email protected])\n>> > To make changes to your subscription:\n>> > http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 26 Jun 2012 10:22:41 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of a large array access by position (tested\n\tversion 9.1.3)"
},
{
"msg_contents": "2012/6/26 Maxim Boguk <[email protected]>:\n>\n>\n> On Tue, Jun 26, 2012 at 6:04 PM, Pavel Stehule <[email protected]>\n> wrote:\n>>\n>> 2012/6/26 Marc Mamin <[email protected]>:\n>> >\n>> >>> On 22/06/12 09:02, Maxim Boguk wrote:\n>> >\n>> >>> May be I completely wrong but I always assumed that the access speed\n>> >>> to the array element in PostgreSQL should be close to constant time.\n>> >>> But in tests I found that access speed degrade as O(N) of array size.\n>> >\n>> >>> Is that behaviour is correct?\n>>\n>> yes - access to n position means in postgresql - skip n-1 elements\n>>\n>\n> Hi,\n>\n> I understand what you mean, but in my test for all values of N test\n> performed access only to first 10000 elements of the array independent of\n> the array size....\n> So i still can't see why access time degrade soo fast for N>10000...:\n>\n> WITH\n> --GENERATE single entry table with single ARRAY field of size N\n> t AS (SELECT ARRAY(SELECT * FROM generate_series(1,N)) AS _array)\n> --iterate over first 10000 elements of that ARRAY\n> SELECT count((SELECT _array[i] FROM t)) FROM generate_series(1,10000) as\n> g(i);\n>\n> ... if access time depend only on position then after 10k there should not\n> be any serious degradation, in fact a perfromance degradation is almost\n> linear.\n> 10k 34ms\n> 50k 177ms\n> 100k 321ms\n> 500k 4100ms\n> 1M 8100ms\n> 2M 22000ms\n> 5M 61000ms\n> 10M 220000ms = 22ms to sinlge array element access.\n\nin this use case TOAST/DETOAST is not used, but probably there are\nproblem with array copy. Taking n element of array means calling some\nfunction, and postgresql uses only passing parameters by value - so\ncopy of large array needs higher time.\n\nthis issue is solved partially in 9.2, where you can use FOR EACH\ncycle http://www.postgresql.org/docs/9.1/interactive/plpgsql-control-structures.html#PLPGSQL-FOREACH-ARRAY\n\nRegards\n\nPavel\n\n>\n> And I think no toasting/detoasting happen in my test case.\n>\n> Kind Regards,\n> Maksym\n",
"msg_date": "Tue, 26 Jun 2012 11:03:14 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of a large array access by position (tested\n\tversion 9.1.3)"
},
{
"msg_contents": "> >> >>> Is that behaviour is correct?\n> >> \n> >> yes - access to n position means in postgresql - skip n-1 elements\n> > \n> > Hmmm...\n> > \n> > how many elements to be skipped here ?\n> \n> there are two independent stages:\n> \n> a) detoast - loading and decompression (complete array is detoasted)\n> b) access\n> \n> if you has very large arrays, then @a is significant\n\nThere is a place to add PG_GETARG_ARRAY_P_SLICE. The code is just not done \nyet. \n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation",
"msg_date": "Thu, 28 Jun 2012 12:22:35 +0200",
"msg_from": "=?utf-8?q?C=C3=A9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of a large array access by position (tested version\n\t9.1.3)"
}
] |
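A small plpgsql sketch of the FOREACH approach Pavel points to above, which walks the array once instead of paying the per-position cost of repeated _array[i] lookups. The function name and the element limit are made up for the example:

CREATE OR REPLACE FUNCTION sum_first_elements(_array int[], _limit int)
RETURNS bigint AS $$
DECLARE
    elem  int;
    total bigint := 0;
    n     int := 0;
BEGIN
    -- The array is detoasted once; FOREACH then iterates it sequentially,
    -- so the loop does not rescan the array for every element.
    FOREACH elem IN ARRAY _array LOOP
        total := total + elem;
        n := n + 1;
        EXIT WHEN n >= _limit;
    END LOOP;
    RETURN total;
END;
$$ LANGUAGE plpgsql;

-- Example: sum the first 10000 elements of a 1M-element array.
SELECT sum_first_elements(ARRAY(SELECT * FROM generate_series(1, 1000000)), 10000);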
[
{
"msg_contents": "I'm trying to work through a root cause on a performance problem. I'd like to \nbe able to \"show\" that a problem was fixed by analyzing the table.\n\nwhat i've done is\nset default_statistics_target=1\nanalyze <Table>\n\nThat gets rid of most of the rows in pg_stats, but i'm still getting decent performance.\n\nIt's possible that the existing stats were just not optimal, and i won't be able to get that back.\n\nBut I just want to verify that what i've done is the only real option that I have? am i missing anything\nelse that I could try?\n\n(I'm on PG9.1)\n\nThanks.\n\nDave\n",
"msg_date": "Fri, 22 Jun 2012 10:07:56 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Drop statistics?"
},
{
"msg_contents": "David Kerr <[email protected]> writes:\n> I'm trying to work through a root cause on a performance problem. I'd like to\n> be able to \"show\" that a problem was fixed by analyzing the table.\n\n> what i've done is\n> set default_statistics_target=1\n> analyze <Table>\n\n> That gets rid of most of the rows in pg_stats, but i'm still getting decent performance.\n\nI usually do something like\n\nDELETE FROM pg_statistic WHERE starelid = 'foo'::regclass;\n\n(you need to be superuser to be allowed to do this).\n\nYou may need to keep an eye on whether auto-analyze is coming along and\nundoing what you did, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Jun 2012 13:27:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Drop statistics?"
},
{
"msg_contents": "On Fri, Jun 22, 2012 at 01:27:51PM -0400, Tom Lane wrote:\n- David Kerr <[email protected]> writes:\n- > I'm trying to work through a root cause on a performance problem. I'd like to\n- > be able to \"show\" that a problem was fixed by analyzing the table.\n- \n- > what i've done is\n- > set default_statistics_target=1\n- > analyze <Table>\n- \n- > That gets rid of most of the rows in pg_stats, but i'm still getting decent performance.\n- \n- I usually do something like\n- \n- DELETE FROM pg_statistic WHERE starelid = 'foo'::regclass;\n- \n- (you need to be superuser to be allowed to do this).\n- \n- You may need to keep an eye on whether auto-analyze is coming along and\n- undoing what you did, too.\n- \n- \t\t\tregards, tom lane\n- \n\nAwesome, thanks!\n\nDave\n",
"msg_date": "Fri, 22 Jun 2012 11:04:36 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Drop statistics?"
},
{
"msg_contents": "On Fri, Jun 22, 2012 at 11:04:36AM -0700, David Kerr wrote:\n> On Fri, Jun 22, 2012 at 01:27:51PM -0400, Tom Lane wrote:\n> - David Kerr <[email protected]> writes:\n> - > I'm trying to work through a root cause on a performance problem. I'd like to\n> - > be able to \"show\" that a problem was fixed by analyzing the table.\n> - \n> - > what i've done is\n> - > set default_statistics_target=1\n> - > analyze <Table>\n> - \n> - > That gets rid of most of the rows in pg_stats, but i'm still getting decent performance.\n> - \n> - I usually do something like\n> - \n> - DELETE FROM pg_statistic WHERE starelid = 'foo'::regclass;\n> - \n> - (you need to be superuser to be allowed to do this).\n> - \n> - You may need to keep an eye on whether auto-analyze is coming along and\n> - undoing what you did, too.\n> - \n> - \t\t\tregards, tom lane\n> - \n> \n> Awesome, thanks!\n\nOne cool trick I have seen is to do the DELETE pg_statistic in a multi-statement\ntransaction and then run query query, and roll it back. This allows the\nstatistics to be preserved, and for only your query to see empty\npg_statistic values for the table.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Tue, 3 Jul 2012 13:16:14 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Drop statistics?"
},
{
"msg_contents": "On Jul 3, 2012, at 10:16 AM, Bruce Momjian wrote:\n\n> On Fri, Jun 22, 2012 at 11:04:36AM -0700, David Kerr wrote:\n>> On Fri, Jun 22, 2012 at 01:27:51PM -0400, Tom Lane wrote:\n>> - David Kerr <[email protected]> writes:\n>> - > I'm trying to work through a root cause on a performance problem. I'd like to\n>> - > be able to \"show\" that a problem was fixed by analyzing the table.\n>> - \n>> - > what i've done is\n>> - > set default_statistics_target=1\n>> - > analyze <Table>\n>> - \n>> - > That gets rid of most of the rows in pg_stats, but i'm still getting decent performance.\n>> - \n>> - I usually do something like\n>> - \n>> - DELETE FROM pg_statistic WHERE starelid = 'foo'::regclass;\n>> - \n>> - (you need to be superuser to be allowed to do this).\n>> - \n>> - You may need to keep an eye on whether auto-analyze is coming along and\n>> - undoing what you did, too.\n>> - \n>> - \t\t\tregards, tom lane\n>> - \n>> \n>> Awesome, thanks!\n> \n> One cool trick I have seen is to do the DELETE pg_statistic in a multi-statement\n> transaction and then run query query, and roll it back. This allows the\n> statistics to be preserved, and for only your query to see empty\n> pg_statistic values for the table.\n> \n\nNice! thanks!\n\nDave\n\n",
"msg_date": "Wed, 4 Jul 2012 22:53:57 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Drop statistics?"
}
] |
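A sketch of the delete-and-rollback trick described above, for anyone who wants to reproduce it. It needs superuser rights, and the table name foo and the WHERE clause are placeholders:

BEGIN;

-- Remove the planner statistics for just this table, inside the transaction.
DELETE FROM pg_statistic WHERE starelid = 'foo'::regclass;

-- Within this transaction the planner now sees the table as having no
-- statistics, so the resulting plan can be compared with the analyzed case.
EXPLAIN SELECT * FROM foo WHERE id = 42;

-- Undo the delete so the statistics are preserved for everyone else.
ROLLBACK;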
[
{
"msg_contents": "Howdy,\n\nI just restored a DB from a cold backup (pg_ctl stop -m fast)\n\nWhen starting the DB I see:\nLOG: corrupted statistics file \"global/pgstat.stat\"\n\nWhen I look at the filesystem I don't see a global/pgstat.stat file but i do see a \npg_stat_tmp/pgstat.stat\n\nis that PG rebuilding the corrupt file?\n\nAre those stats related to pg_stats style statistics?\n\nAny idea why that pgstats.stat file would be corrupt after a relativly clean backup like that?\n",
"msg_date": "Fri, 22 Jun 2012 11:46:32 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"global/pgstat.stat\" corrupt"
},
{
"msg_contents": "David Kerr <[email protected]> writes:\n> I just restored a DB from a cold backup (pg_ctl stop -m fast)\n\n> When starting the DB I see:\n> LOG: corrupted statistics file \"global/pgstat.stat\"\n\nIs that repeatable? It wouldn't be too surprising to see this once when\nstarting from a filesystem backup, if you'd managed to capture a\npartially written stats file in the backup.\n\n> When I look at the filesystem I don't see a global/pgstat.stat file but i do see a \n> pg_stat_tmp/pgstat.stat\n\nThis is normal --- global/pgstat.stat is only supposed to exist when the\nsystem is shut down. Transient updates are written into pg_stat_tmp/\n\n> Are those stats related to pg_stats style statistics?\n\nNo, this file is for the stats collection subsystem,\nhttp://www.postgresql.org/docs/9.1/static/monitoring-stats.html\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Jun 2012 15:49:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"global/pgstat.stat\" corrupt"
},
{
"msg_contents": "On Fri, Jun 22, 2012 at 03:49:00PM -0400, Tom Lane wrote:\n- David Kerr <[email protected]> writes:\n- > I just restored a DB from a cold backup (pg_ctl stop -m fast)\n- \n- > When starting the DB I see:\n- > LOG: corrupted statistics file \"global/pgstat.stat\"\n- \n- Is that repeatable? It wouldn't be too surprising to see this once when\n- starting from a filesystem backup, if you'd managed to capture a\n- partially written stats file in the backup.\n\nhmm, possibly. it's somewhat worrysome if that file isn't written out\nwhat else isn't getting written out.\n\nour backups are SAN based snapshots, maybe i need to \"sync\" prior to the snapshot\neven with the DB being down.\n\n- > When I look at the filesystem I don't see a global/pgstat.stat file but i do see a \n- > pg_stat_tmp/pgstat.stat\n- \n- This is normal --- global/pgstat.stat is only supposed to exist when the\n- system is shut down. Transient updates are written into pg_stat_tmp/\n\nah ok\n\n- > Are those stats related to pg_stats style statistics?\n- \n- No, this file is for the stats collection subsystem,\n- http://www.postgresql.org/docs/9.1/static/monitoring-stats.html\n- \n- \t\t\tregards, tom lane\n- \n\nOh, hmm thanks. I had looked at that before but must not have fully understood\nwhat was going on. That's interesting!\n\nDave\n",
"msg_date": "Fri, 22 Jun 2012 13:11:26 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"global/pgstat.stat\" corrupt"
}
] |
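For anyone unsure what is at stake when that file has to be rebuilt: it backs the cumulative statistics-collector views, not the planner's pg_stats, so a corrupt copy only means those counters start over. A quick, illustrative way to inspect (or deliberately reset) them:

-- Cumulative usage counters kept by the statistics collector.
SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
ORDER BY seq_scan + idx_scan DESC
LIMIT 10;

-- Resets the counters for the current database (superuser only).
SELECT pg_stat_reset();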
[
{
"msg_contents": "Hi,\n\nI have a Java application that tries to synchronize tables in two databases\n(remote source to local target). It does so by removing all constraints,\nthen it compares table contents row by row, inserts missing rows and\ndeletes \"extra\" rows in the target database. Delete performance is\nincredibly bad: it handles 100 record deletes in about 16 to 20 seconds(!).\nInsert and update performance is fine.\n\nThe Java statement to handle the delete uses a prepared statement:\n\n\"delete from xxx where xxx_pk=?\"\n\nThe delete statement is then executed using addBatch() and executeBatch()\n(the latter every 100 deletes), and committed. Not using executeBatch makes\nno difference.\n\nAn example table where deletes are slow:\n\npzlnew=# \\d cfs_file\n Table \"public.cfs_file\"\n Column | Type | Modifiers\n------------------+-----------------------------+-----------\n cfsid | bigint | not null\n cfs_date_created | timestamp without time zone | not null\n cfs_name | character varying(512) | not null\n cfs_cfaid | bigint |\n cfs_cfdid | bigint |\nIndexes:\n \"cfs_file_pkey\" PRIMARY KEY, btree (cfsid)\n\nwith no FK constraints at all, and a table size of 940204 rows.\n\nWhile deleting, postgres takes 100% CPU all of the time.\n\n\nInserts and updates are handled in exactly the same way, and these are a\nfew orders of magnitude faster than the deletes.\n\nI am running the DB on an Ubuntu 12.04 - 64bits machine with Postgres 9.1,\nthe machine is a fast machine with the database on ssd, ext4, with 16GB of\nRAM and a i7-3770 CPU @ 3.40GHz.\n\nAnyone has any idea?\n\nThanks in advance,\n\nFrits\n\nHi,I have a Java application that tries to synchronize tables in two databases (remote source to local target). It does so by removing all constraints, then it compares table contents row by row, inserts missing rows and deletes \"extra\" rows in the target database. Delete performance is incredibly bad: it handles 100 record deletes in about 16 to 20 seconds(!). Insert and update performance is fine.\nThe Java statement to handle the delete uses a prepared statement:\"delete from xxx where xxx_pk=?\"The delete statement is then executed using addBatch() and executeBatch() (the latter every 100 deletes), and committed. Not using executeBatch makes no difference.\nAn example table where deletes are slow:pzlnew=# \\d cfs_file Table \"public.cfs_file\" Column | Type | Modifiers \n------------------+-----------------------------+----------- cfsid | bigint | not null cfs_date_created | timestamp without time zone | not null cfs_name | character varying(512) | not null\n cfs_cfaid | bigint | cfs_cfdid | bigint | Indexes: \"cfs_file_pkey\" PRIMARY KEY, btree (cfsid)\nwith no FK constraints at all, and a table size of 940204 rows.While deleting, postgres takes 100% CPU all of the time.Inserts and updates are handled in exactly the same way, and these are a few orders of magnitude faster than the deletes.\nI am running the DB on an Ubuntu 12.04 - 64bits machine with Postgres 9.1, the machine is a fast machine with the database on ssd, ext4, with 16GB of RAM and a i7-3770 CPU @ 3.40GHz.\nAnyone has any idea?Thanks in advance,Frits",
"msg_date": "Mon, 25 Jun 2012 17:42:27 +0200",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres delete performance problem"
},
{
"msg_contents": "Hello.\n\nThis may be wrong type for parameter, like using setObject(param, value) \ninstead of setObject(param, value, type). Especially if value passed is \nstring object. AFAIR index may be skipped in this case. You can check by \nchanging statement to \"delete from xxx where xxx_pk=?::bigint\". If it \nworks, check how parameter is set in java code.\n\n25.06.12 18:42, Frits Jalvingh написав(ла):\n> Hi,\n>\n> I have a Java application that tries to synchronize tables in two \n> databases (remote source to local target). It does so by removing all \n> constraints, then it compares table contents row by row, inserts \n> missing rows and deletes \"extra\" rows in the target database. Delete \n> performance is incredibly bad: it handles 100 record deletes in about \n> 16 to 20 seconds(!). Insert and update performance is fine.\n>\n> The Java statement to handle the delete uses a prepared statement:\n>\n> \"delete from xxx where xxx_pk=?\"\n>\n> The delete statement is then executed using addBatch() and \n> executeBatch() (the latter every 100 deletes), and committed. Not \n> using executeBatch makes no difference.\n>\n> An example table where deletes are slow:\n>\n> pzlnew=# \\d cfs_file\n> Table \"public.cfs_file\"\n> Column | Type | Modifiers\n> ------------------+-----------------------------+-----------\n> cfsid | bigint | not null\n> cfs_date_created | timestamp without time zone | not null\n> cfs_name | character varying(512) | not null\n> cfs_cfaid | bigint |\n> cfs_cfdid | bigint |\n> Indexes:\n> \"cfs_file_pkey\" PRIMARY KEY, btree (cfsid)\n>\n> with no FK constraints at all, and a table size of 940204 rows.\n>\n> While deleting, postgres takes 100% CPU all of the time.\n>\n>\n> Inserts and updates are handled in exactly the same way, and these are \n> a few orders of magnitude faster than the deletes.\n>\n> I am running the DB on an Ubuntu 12.04 - 64bits machine with Postgres \n> 9.1, the machine is a fast machine with the database on ssd, ext4, \n> with 16GB of RAM and a i7-3770 CPU @ 3.40GHz.\n>\n> Anyone has any idea?\n>\n> Thanks in advance,\n>\n> Frits\n>\n\n",
"msg_date": "Mon, 25 Jun 2012 18:52:04 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres delete performance problem"
},
{
"msg_contents": "\"It does so by removing all constraints, then it compares table contents row by row, inserts missing rows and deletes \"extra\" rows in the target database.\"\nIf the delete's you do when the constraints and indexes are removed then you need to create the constraints and indexes before you delete the rows\n\n\n\n\n\n>________________________________\n> De: Frits Jalvingh <[email protected]>\n>Para: [email protected] \n>Enviado: Lunes 25 de junio de 2012 10:42\n>Asunto: [PERFORM] Postgres delete performance problem\n> \n>\n>Hi,\n>\n>\n>I have a Java application that tries to synchronize tables in two databases (remote source to local target). It does so by removing all constraints, then it compares table contents row by row, inserts missing rows and deletes \"extra\" rows in the target database. Delete performance is incredibly bad: it handles 100 record deletes in about 16 to 20 seconds(!). Insert and update performance is fine.\n>\n>\n>The Java statement to handle the delete uses a prepared statement:\n>\n>\n>\"delete from xxx where xxx_pk=?\"\n>\n>\n>The delete statement is then executed using addBatch() and executeBatch() (the latter every 100 deletes), and committed. Not using executeBatch makes no difference.\n>\n>\n>An example table where deletes are slow:\n>\n>\n>pzlnew=# \\d cfs_file\n> Table \"public.cfs_file\"\n> Column | Type | Modifiers \n>------------------+-----------------------------+-----------\n> cfsid | bigint | not null\n> cfs_date_created | timestamp without time zone | not null\n> cfs_name | character varying(512) | not null\n> cfs_cfaid | bigint | \n> cfs_cfdid | bigint | \n>Indexes:\n> \"cfs_file_pkey\" PRIMARY KEY, btree (cfsid)\n>\n>\n>with no FK constraints at all, and a table size of 940204 rows.\n>\n>\n>While deleting, postgres takes 100% CPU all of the time.\n>\n>\n>\n>\n>Inserts and updates are handled in exactly the same way, and these are a few orders of magnitude faster than the deletes.\n>\n>\n>I am running the DB on an Ubuntu 12.04 - 64bits machine with Postgres 9.1, the machine is a fast machine with the database on ssd, ext4, with 16GB of RAM and a i7-3770 CPU @ 3.40GHz.\n>\n>\n>Anyone has any idea?\n>\n>\n>Thanks in advance,\n>\n>\n>Frits\n>\n>\n>\n>\n\"It does so by removing all constraints, then it compares table contents\n row by row, inserts missing rows and deletes \"extra\" rows in the target\n database.\"If the delete's you do when the constraints and indexes are removed then you need to create the constraints and indexes before you delete the rows De: Frits Jalvingh <[email protected]> Para: [email protected] Enviado: Lunes 25 de junio de 2012 10:42 Asunto: [PERFORM] Postgres delete performance problem\n Hi,I have a Java application that tries to synchronize tables in two databases (remote source to local target). It does so by removing all constraints, then it compares table contents row by row, inserts missing rows and deletes \"extra\" rows in the target database. Delete performance is incredibly bad: it handles 100 record deletes in about 16 to 20 seconds(!). Insert and update performance is fine.\nThe Java statement to handle the delete uses a prepared statement:\"delete from xxx where xxx_pk=?\"The delete statement is then executed using addBatch() and executeBatch() (the latter every 100 deletes), and committed. 
Not using executeBatch makes no difference.\nAn example table where deletes are slow:pzlnew=# \\d cfs_file Table \"public.cfs_file\" Column | Type | Modifiers \n------------------+-----------------------------+----------- cfsid | bigint | not null cfs_date_created | timestamp without time zone | not null cfs_name | character varying(512) | not null\n cfs_cfaid | bigint | cfs_cfdid | bigint | Indexes: \"cfs_file_pkey\" PRIMARY KEY, btree (cfsid)\nwith no FK constraints at all, and a table size of 940204 rows.While deleting, postgres takes 100% CPU all of the time.Inserts and updates are handled in exactly the same way, and these are a few orders of magnitude faster than the deletes.\nI am running the DB on an Ubuntu 12.04 - 64bits machine with Postgres 9.1, the machine is a fast machine with the database on ssd, ext4, with 16GB of RAM and a i7-3770 CPU @ 3.40GHz.\nAnyone has any idea?Thanks in advance,Frits",
"msg_date": "Mon, 25 Jun 2012 20:14:21 +0100 (BST)",
"msg_from": "Alejandro Carrillo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres delete performance problem"
},
{
"msg_contents": "Hello,\n\nJust wondering whether you were able to resolve this issue.\nWe are experiencing a very similar issue with deletes using Postgrs 9.0.5 on\nUbuntu 12.04.\n\nDan\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Postgres-delete-performance-problem-tp5714153p5738765.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 4 Jan 2013 09:49:59 -0800 (PST)",
"msg_from": "dankogan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres delete performance problem"
},
{
"msg_contents": "Yes, the issue was resolved by the method I proposed. You need to specify\ncorrect type either on java-side or server-side (query text).\nSee my explanation (it seems it got out of the list):\n\nThe driver does not parse your query, so it simply passes everything to\nserver.\nServer use widening conversion, so \"bigint=number\" becomes\n\"bigint::number=number\", not \"bigint=number::bigint\" and index can't be\nused when any function is applied to indexed field.\nNote, that server can't do \"bigint=number::bigint\" because it does not know\nthe numbers you will pass.\nConsider examples:\n1) 0 = 123456789012345678901234567890\n2) 0 = 0.4\nCorrect value is false, but \"bigint=number::bigint\" will give you\n\"overflow\" error for the first example and true for the second, which is\nincorrect.\n\n\n2013/1/4 dankogan <[email protected]>\n\n> Hello,\n>\n> Just wondering whether you were able to resolve this issue.\n> We are experiencing a very similar issue with deletes using Postgrs 9.0.5\n> on\n> Ubuntu 12.04.\n>\n> Dan\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/Postgres-delete-performance-problem-tp5714153p5738765.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nYes, the issue was resolved by the method I proposed. You need to specify correct type either on java-side or server-side (query text).See my explanation (it seems it got out of the list):\nThe driver does not parse your query, so it simply passes everything to server.\nServer use widening conversion, so \"bigint=number\" becomes \"bigint::number=number\", not \"bigint=number::bigint\" and index can't be used when any function is applied to indexed field.\nNote, that server can't do \"bigint=number::bigint\" because it does not know the numbers you will pass.\nConsider examples:1) 0 = 123456789012345678901234567890\n2) 0 = 0.4Correct value is false, but \"bigint=number::bigint\" will give you \"overflow\" error for the first example and true for the second, which is incorrect.\n2013/1/4 dankogan <[email protected]>\nHello,\n\nJust wondering whether you were able to resolve this issue.\nWe are experiencing a very similar issue with deletes using Postgrs 9.0.5 on\nUbuntu 12.04.\n\nDan\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Postgres-delete-performance-problem-tp5714153p5738765.html\n\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Fri, 4 Jan 2013 20:05:28 +0200",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres delete performance problem"
}
] |
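The parameter-type effect explained in the last two messages can be reproduced from psql alone. This is a reduced sketch with a made-up table loosely modeled on the cfs_file example; the JDBC-side fix remains the one already given in the thread (bind the value with the correct type, or cast the placeholder in the query text):

CREATE TABLE cfs_demo (cfsid bigint PRIMARY KEY, cfs_name text);
INSERT INTO cfs_demo SELECT g, 'file_' || g FROM generate_series(1, 100000) g;
ANALYZE cfs_demo;

-- Parameter declared as numeric: the comparison is planned as
-- cfsid::numeric = $1, which cannot use the bigint primary key index.
PREPARE del_untyped (numeric) AS DELETE FROM cfs_demo WHERE cfsid = $1;
EXPLAIN EXECUTE del_untyped (42);   -- sequential scan

-- Parameter declared as bigint (or the query written as "cfsid = ?::bigint"):
-- the primary key index can be used again.
PREPARE del_typed (bigint) AS DELETE FROM cfs_demo WHERE cfsid = $1;
EXPLAIN EXECUTE del_typed (42);     -- index scan on cfs_demo_pkey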
[
{
"msg_contents": "Any thoughts about this? It seems to be a new database system designed\nfrom scratch to take advantage of the growth in RAM size (data sets that\nfit in memory) and the availability of SSD drives. It claims to be \"the\nworld's fastest database.\"\n\nhttp://www.i-programmer.info/news/84-database/4397-memsql-80000-queries-per-second.html\n\nIt's hard to see at a glance if this is a robust system suitable for\nmonetary transactions, or just a fast-but-lossy system that you'd use for\nsocial twitter.\n\nCraig\n\nAny thoughts about this? It seems to be a new database system designed from scratch to take advantage of the growth in RAM size (data sets that fit in memory) and the availability of SSD drives. It claims to be \"the world's fastest database.\"\nhttp://www.i-programmer.info/news/84-database/4397-memsql-80000-queries-per-second.htmlIt's hard to see at a glance if this is a robust system suitable for monetary transactions, or just a fast-but-lossy system that you'd use for social twitter.\nCraig",
"msg_date": "Mon, 25 Jun 2012 09:25:53 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "MemSQL the \"world's fastest database\"?"
},
{
"msg_contents": "On 06/25/2012 11:25 AM, Craig James wrote:\n\n> Any thoughts about this? It seems to be a new database system designed\n> from scratch to take advantage of the growth in RAM size (data sets that\n> fit in memory) and the availability of SSD drives. It claims to be \"the\n> world's fastest database.\"\n\nI personally don't put a lot of stock into this. You can get 90k+ TPS \nfrom an old PostgreSQL 8.2 install if it's all in memory. High \ntransactional output itself isn't substantially difficult to achieve.\n\nI'm also not entirely certain how this is different from something like \nVoltDB, which also acts as an in-memory database with high TPS throughput.\n\nThen there's this from the article:\n\n\"The key ideas are that SQL code is translated into C++, so avoiding the \nneed to use a slow SQL interpreter, and that the data is kept in memory, \nwith disk read/writes taking place in the background.\"\n\nBesides the nonsense statement that SQL is translated to C++ (Lexical \nscanners would circumvent even this step, and does that mean you have to \nliterally compile the resulting C++? Ridiculous.) This violates at least \nthe 'D' tenet of ACID. Fine for transient Facebook data, but not going \nanywhere near our systems.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 25 Jun 2012 12:03:10 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MemSQL the \"world's fastest database\"?"
},
{
"msg_contents": "Craig James <[email protected]> wrote:\n \n> It claims to be \"the world's fastest database.\"\n \n> [link where they boast of 80,000 tps read-only]\n \n20,000 tps? Didn't we hit well over 300,000 tps in read-only\nbenchmarks of PostgreSQL with some of the 9.2 performance\nenhancements?\n \n-Kevin\n",
"msg_date": "Mon, 25 Jun 2012 12:23:11 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MemSQL the \"world's fastest database\"?"
},
{
"msg_contents": "On 6/25/12 10:23 AM, Kevin Grittner wrote:\n> Craig James <[email protected]> wrote:\n> \n>> It claims to be \"the world's fastest database.\"\n> \n>> [link where they boast of 80,000 tps read-only]\n> \n> 20,000 tps? Didn't we hit well over 300,000 tps in read-only\n> benchmarks of PostgreSQL with some of the 9.2 performance\n> enhancements?\n\nYes. The dirty truth is that there's nothing special, performance-wise,\nabout an \"in memory\" database except that it doesn't write to disk (or\nprotect your data from power-out).\n\nIn the early 00's people thought that you could build a database in some\nfundamentally different way if you started with the tenet that it was\n100% in memory. Hence RethinkDB, MySQL InMemory Tabletype, etc.\n\nAs it turns out, that doesn't change anything; you still need data\npages, indexes, sort routines, etc. etc. \"Disk\" databases don't operate\noff disk; they get copied to memory, so they're already effectively \"in\nmemory\".\n\nBTW, VoltDB's innovation is not being \"in memory\" (it can spill to\ndisk), but rather their innovative transactional clustering approach.\n\nThe new non-relational databases are \"fast\" on poor hardware (Amazon,\ndeveloper laptops) by cutting features and optimizing for poor (but\ncommon) access patterns. Not by being \"in memory\", which is just a\nside effect of not having spill-to-disk code.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n",
"msg_date": "Mon, 25 Jun 2012 10:41:50 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MemSQL the \"world's fastest database\"?"
},
{
"msg_contents": "\n\n---- Original message ----\n>Date: Mon, 25 Jun 2012 12:03:10 -0500\n>From: [email protected] (on behalf of Shaun Thomas <[email protected]>)\n>Subject: Re: [PERFORM] MemSQL the \"world's fastest database\"? \n>To: Craig James <[email protected]>\n>Cc: <[email protected]>\n>\n>On 06/25/2012 11:25 AM, Craig James wrote:\n>\n>> Any thoughts about this? It seems to be a new database system designed\n>> from scratch to take advantage of the growth in RAM size (data sets that\n>> fit in memory) and the availability of SSD drives. It claims to be \"the\n>> world's fastest database.\"\n>\n>I personally don't put a lot of stock into this. You can get 90k+ TPS \n>from an old PostgreSQL 8.2 install if it's all in memory. High \n>transactional output itself isn't substantially difficult to achieve.\n>\n>I'm also not entirely certain how this is different from something like \n>VoltDB, which also acts as an in-memory database with high TPS throughput.\n>\n>Then there's this from the article:\n>\n>\"The key ideas are that SQL code is translated into C++, so avoiding the \n>need to use a slow SQL interpreter, and that the data is kept in memory, \n>with disk read/writes taking place in the background.\"\n>\n>Besides the nonsense statement that SQL is translated to C++ (Lexical \n>scanners would circumvent even this step, and does that mean you have to \n>literally compile the resulting C++? Ridiculous.) This violates at least \n>the 'D' tenet of ACID. Fine for transient Facebook data, but not going \n>anywhere near our systems.\n\nDB2 on the mainframe (if memory serves), for one, will compile static SQL to machine code. Not that unusual.\n\nhttp://www.mainframegurukul.com/tutorials/database/db2_tutorials/DB2Precompilebind.html\n\nhttp://www.mainframegurukul.com/tutorials/database/db2_tutorials/sample-db2-cobol-compile-jcl.html\n\n\n>\n>-- \n>Shaun Thomas\n>OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n>312-444-8534\n>[email protected]\n>\n>______________________________________________\n>\n>See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 25 Jun 2012 14:02:41 -0400 (EDT)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MemSQL the \"world's fastest\n database\"?"
},
{
"msg_contents": "<[email protected]> writes:\n>> Then there's this from the article:\n>> \n>> \"The key ideas are that SQL code is translated into C++, so avoiding the \n>> need to use a slow SQL interpreter, and that the data is kept in memory, \n>> with disk read/writes taking place in the background.\"\n>> \n>> Besides the nonsense statement that SQL is translated to C++ (Lexical \n>> scanners would circumvent even this step, and does that mean you have to \n>> literally compile the resulting C++? Ridiculous.) ...\n\n> DB2 on the mainframe (if memory serves), for one, will compile static SQL to machine code. Not that unusual.\n\nYeah. Actually such techniques go back at least to the fifties (look up\n\"sort generators\" sometime). They are out of fashion now because\n(1) the achievable speed difference isn't what it once was, and\n(2) programs that execute self-modified code are prone to seriously\nnasty security issues. Get any sort of control over the code generator,\nand you can happily execute anything you want.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 25 Jun 2012 16:15:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MemSQL the \"world's fastest database\"?"
},
{
"msg_contents": "On 06/25/2012 01:23 PM, Kevin Grittner wrote:\n> Craig James<[email protected]> wrote:\n>\n>> It claims to be \"the world's fastest database.\"\n>\n>> [link where they boast of 80,000 tps read-only]\n>\n> 20,000 tps? Didn't we hit well over 300,000 tps in read-only\n> benchmarks of PostgreSQL with some of the 9.2 performance\n> enhancements?\n\nIt's 20K TPS on something that MySQL will only do 3.5 TPS. The queries \nmust be much heavier than the ones PostgreSQL can get 200K+ on. We'd \nhave to do a deeper analysis of the actual queries used to know exactly \nhow much heavier though. They might be the type MySQL is usually faster \nthan PostgreSQL on (i.e. ones using simple operations and operators), or \nthey could be ones where PostgreSQL is usually faster than MySQL (i.e. \nmore complicated joins). All I can tell you for sure if that they used \na query mix that makes MemSQL look much faster than MySQL.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n",
"msg_date": "Sun, 01 Jul 2012 00:18:43 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MemSQL the \"world's fastest database\"?"
},
{
"msg_contents": "On Sat, Jun 30, 2012 at 10:18 PM, Greg Smith <[email protected]> wrote:\n> On 06/25/2012 01:23 PM, Kevin Grittner wrote:\n>>\n>> Craig James<[email protected]> wrote:\n>>\n>>> It claims to be \"the world's fastest database.\"\n>>\n>>\n>>> [link where they boast of 80,000 tps read-only]\n>>\n>>\n>> 20,000 tps? Didn't we hit well over 300,000 tps in read-only\n>> benchmarks of PostgreSQL with some of the 9.2 performance\n>> enhancements?\n>\n>\n> It's 20K TPS on something that MySQL will only do 3.5 TPS. The queries must\n> be much heavier than the ones PostgreSQL can get 200K+ on. We'd have to do\n> a deeper analysis of the actual queries used to know exactly how much\n> heavier though. They might be the type MySQL is usually faster than\n> PostgreSQL on (i.e. ones using simple operations and operators), or they\n> could be ones where PostgreSQL is usually faster than MySQL (i.e. more\n> complicated joins). All I can tell you for sure if that they used a query\n> mix that makes MemSQL look much faster than MySQL.\n\nConsidering I can build a pgsql 8.4 machine with 256G RAM and 64\nOpteron cores and a handful of SSDs or HW RAID that can do REAL 7k to\n8k RW TPS right now for well under $10k, 20k TPS on an in memory\ndatabase isn't all that impressive. I wonder what numbers pg 9.1/9.2\ncan / will be able to pull off on such hardare?\n",
"msg_date": "Sat, 30 Jun 2012 23:00:34 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MemSQL the \"world's fastest database\"?"
},
{
"msg_contents": "It sounds like a lot of marketing BS :)\n\nBut I like the fact that they use modern language like C++. It is a\npain to try doing any development on postgresql. Transition to c++\nwould be nice (I know it's been debated on #hackers a looot).\n",
"msg_date": "Tue, 3 Jul 2012 22:46:32 +0100",
"msg_from": "Gregg Jaskiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MemSQL the \"world's fastest database\"?"
},
{
"msg_contents": "On 07/01/2012 01:00 AM, Scott Marlowe wrote:\n> Considering I can build a pgsql 8.4 machine with 256G RAM and 64\n> Opteron cores and a handful of SSDs or HW RAID that can do REAL 7k to\n> 8k RW TPS right now for well under $10k, 20k TPS on an in memory\n> database isn't all that impressive.\n\nAgain, their TPS numbers are useless without a contest of how big each \ntransaction is, and we don't know. I can take MemSQL seriously when \nthere's a press release describing how to replicate their benchmark \nindependently. Then it's useful to look at the absolute number.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n",
"msg_date": "Thu, 05 Jul 2012 21:45:42 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MemSQL the \"world's fastest database\"?"
}
] |
[
{
"msg_contents": "Hi,\n\nI've read this:\nhttp://wiki.postgresql.org/wiki/Prioritizing_databases_by_separating_into_multiple_clusters\n\nBut it doesn't really say anything about memory.\nIf i can fit an extra cluster into it's shared buffer, it should have fast\nreads, right?\nEven if i don't have seperate spindles and the disks are busy.\nThis is on a Debain server, postgres 8.4\n\nCheers,\n\nWBL\n-- \n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n\nHi,I've read this:http://wiki.postgresql.org/wiki/Prioritizing_databases_by_separating_into_multiple_clusters\nBut it doesn't really say anything about memory.If i can fit an extra cluster into it's shared buffer, it should have fast reads, right?Even if i don't have seperate spindles and the disks are busy.\nThis is on a Debain server, postgres 8.4Cheers,WBL\n-- \n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth",
"msg_date": "Wed, 27 Jun 2012 00:16:55 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "[performance] fast reads on a busy server"
},
{
"msg_contents": "On Wed, 2012-06-27 at 00:16 +0200, Willy-Bas Loos wrote:\n> Hi,\n> \n> I've read this:\n> http://wiki.postgresql.org/wiki/Prioritizing_databases_by_separating_into_multiple_clusters\n> \n> But it doesn't really say anything about memory.\n> If i can fit an extra cluster into it's shared buffer, it should have\n> fast reads, right?\n> Even if i don't have seperate spindles and the disks are busy.\n\nCheck if you are CPU-bound. On a database which fits fully you may\nalready be.\n\n> This is on a Debain server, postgres 8.4\n\nAnd if possible, upgrade to latest pg (9.1). On some operations this\nalready may give you a considerable performance boost\n\n> Cheers,\n> \n> WBL\n> -- \n> \"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n> \n\n-- \n-------\nHannu Krosing\nPostgreSQL Unlimited Scalability and Performance Consultant\n2ndQuadrant Nordic\nPG Admin Book: http://www.2ndQuadrant.com/books/\n\n",
"msg_date": "Wed, 27 Jun 2012 09:34:16 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [performance] fast reads on a busy\n server"
},
{
"msg_contents": "On Wed, Jun 27, 2012 at 9:34 AM, Hannu Krosing <[email protected]>wrote:\n\n> Check if you are CPU-bound. On a database which fits fully you may\n> already be.\n>\n> Being CPU-bound is my goal.\nThat makes your answer a yes to me.\n\nOnly i'm afraid that this solution is not optimal.\nBecause i am stealing more resopurces from the (already busy) rest of the\nserver than necessary. That's because the file-cache will also be filled\n(partially) with data that this cluster uses, unnecessarily. I'm not going\nto read from the file cache, because the data will be in the shared_buffers\nas soon as they have been read from disk.\n\nSo then, would it be better to use 80% of memory for the shared buffers of\nthe combined clusters?\nI've read that 25% is good and 40% is max because of the file cache, but it\ndoesn't make much sense..\n\nGreg Smith writes\n(here<http://www.westnet.com/%7Egsmith/content/postgresql/InsideBufferCache.pdf>,\npage 12):\n* PostgreSQL is designed to rely heavily on the operating\nsystem cache, because portable sotware like PostgreSQL can’t\nknow enough about the filesystem or disk layout to make\noptimal decisions about how to read and write files\n* The shared buffer cache is really duplicating what the\noperating system is already doing: caching popular file blocks\n* In many cases, you’ll find exactly the same blocks cached by\nboth the buffer cache and the OS page cache\n* This makes is a bad idea to give PostgreSQL too much\nmemory to manage\n\nI cannot follow that reasoning completely. Who needs OS level file cache\nwhen postgres' shared_buffers is better? The efficiency should go up again\nafter passing 50% of shared buffers, where you would be caching everything\ntwice.\nThe only problem i see is that work_mem and such will end up in SWAP if\nthere isn't enough memory left over to allocate.\n\nCheers,\n\nWBL\n\n-- \n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n\nOn Wed, Jun 27, 2012 at 9:34 AM, Hannu Krosing <[email protected]> wrote:\nCheck if you are CPU-bound. On a database which fits fully you may\nalready be.\nBeing CPU-bound is my goal.That makes your answer a yes to me.Only i'm afraid that this solution is not optimal. Because i am stealing more resopurces from the (already busy) rest of the server than necessary. That's because the file-cache will also be filled (partially) with data that this cluster uses, unnecessarily. I'm not going to read from the file cache, because the data will be in the shared_buffers as soon as they have been read from disk.\nSo then, would it be better to use 80% of memory for the shared buffers of the combined clusters?I've read that 25% is good and 40% is max because of the file cache, but it doesn't make much sense..\nGreg Smith writes (here, page 12):* PostgreSQL is designed to rely heavily on the operatingsystem cache, because portable sotware like PostgreSQL can’t\nknow enough about the filesystem or disk layout to makeoptimal decisions about how to read and write files* The shared buffer cache is really duplicating what theoperating system is already doing: caching popular file blocks\n* In many cases, you’ll find exactly the same blocks cached byboth the buffer cache and the OS page cache* This makes is a bad idea to give PostgreSQL too muchmemory to manageI cannot follow that reasoning completely. Who needs OS level file cache when postgres' shared_buffers is better? 
The efficiency should go up again after passing 50% of shared buffers, where you would be caching everything twice.\nThe only problem i see is that work_mem and such will end up in SWAP if there isn't enough memory left over to allocate.Cheers,WBL-- \"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth",
"msg_date": "Wed, 27 Jun 2012 12:01:51 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [performance] fast reads on a busy server"
},
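Whether this cluster is really being served from shared_buffers, or is still falling through to the kernel cache and disk, can be estimated from the statistics views. A minimal sketch, assuming the stats collector is enabled; blks_hit counts shared-buffer hits and blks_read counts reads that had to go below the buffer cache (those may still be satisfied by the OS page cache):

```sql
-- Rough shared_buffers hit ratio for the current database.
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / NULLIF(blks_hit + blks_read, 0), 4) AS hit_ratio
FROM pg_stat_database
WHERE datname = current_database();
```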
{
"msg_contents": "On Wed, Jun 27, 2012 at 12:01 PM, Willy-Bas Loos <[email protected]> wrote:\n\n> I cannot follow that reasoning completely. Who needs OS level file cache\n> when postgres' shared_buffers is better? The efficiency should go up again\n> after passing 50% of shared buffers, where you would be caching everything\n> twice.\n> The only problem i see is that work_mem and such will end up in SWAP if\n> there isn't enough memory left over to allocate.\\\n\n\nThat is, 25% probably works best when there is only one cluster.\nI'm just wondering about this particular case:\n* more than 1 cluster on the machine, no separate file systems.\n* need fast writes on one cluster, so steal some memory to fit the DB in\nshared_buffers\n* now there is useless data in the OS file-cache\n\nShould i use a larger shared_buffers for the other cluster(s) too, so that\ni bypass the inefficient OS file-cache?\n\nCheers,\n\nWBL\n-- \n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n\nOn Wed, Jun 27, 2012 at 12:01 PM, Willy-Bas Loos <[email protected]> wrote:\nI cannot follow that reasoning completely. Who needs OS level file cache when postgres' shared_buffers is better? The efficiency should go up again after passing 50% of shared buffers, where you would be caching everything twice.\n\nThe only problem i see is that work_mem and such will end up in SWAP if there isn't enough memory left over to allocate.\\That is, 25% probably works best when there is only one cluster.\nI'm just wondering about this particular case:* more than 1 cluster on the machine, no separate file systems.* need fast writes on one cluster, so steal some memory to fit the DB in shared_buffers\n* now there is useless data in the OS file-cache\n\nShould i use a larger shared_buffers for the other cluster(s) too, so that i bypass the inefficient OS file-cache?Cheers,WBL -- \"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth",
"msg_date": "Wed, 27 Jun 2012 13:28:39 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [performance] fast reads on a busy server"
},
{
"msg_contents": "On Wed, Jun 27, 2012 at 1:28 PM, Willy-Bas Loos <[email protected]> wrote:\n\n>\n> * need fast writes on one cluster, so steal some memory to fit the DB in\n> shared_buffers\n>\n> correction: READs, not writes. sry.\n\n\n-- \n\"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth\n\nOn Wed, Jun 27, 2012 at 1:28 PM, Willy-Bas Loos <[email protected]> wrote:\n* need fast writes on one cluster, so steal some memory to fit the DB in shared_buffers\ncorrection: READs, not writes. sry.-- \"Quality comes from focus and clarity of purpose\" -- Mark Shuttleworth",
"msg_date": "Wed, 27 Jun 2012 13:58:06 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [performance] fast reads on a busy server"
},
{
"msg_contents": "On Jun 27, 2012 2:29 PM, \"Willy-Bas Loos\" <[email protected]> wrote:\n> Should i use a larger shared_buffers for the other cluster(s) too, so\nthat i bypass the inefficient OS file-cache?\n\nOnce the in-memory cluster has filled its shared buffers, the pages go cold\nfor the OS cache and get replaced with pages of other clusters that are\nactually referenced.\n\nAnts Aasma\n\nOn Jun 27, 2012 2:29 PM, \"Willy-Bas Loos\" <[email protected]> wrote:\n> Should i use a larger shared_buffers for the other cluster(s) too, so that i bypass the inefficient OS file-cache?\nOnce the in-memory cluster has filled its shared buffers, the pages go cold for the OS cache and get replaced with pages of other clusters that are actually referenced.\nAnts Aasma",
"msg_date": "Wed, 27 Jun 2012 15:58:22 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] [performance] fast reads on a\n busy server"
},
{
"msg_contents": "Thank you.\n\nCheers,\nWBL\nOp 27 jun. 2012 14:59 schreef \"Ants Aasma\" <[email protected]> het volgende:\n\n> On Jun 27, 2012 2:29 PM, \"Willy-Bas Loos\" <[email protected]> wrote:\n> > Should i use a larger shared_buffers for the other cluster(s) too, so\n> that i bypass the inefficient OS file-cache?\n>\n> Once the in-memory cluster has filled its shared buffers, the pages go\n> cold for the OS cache and get replaced with pages of other clusters that\n> are actually referenced.\n>\n> Ants Aasma\n>\n\nThank you.\nCheers,\nWBL\nOp 27 jun. 2012 14:59 schreef \"Ants Aasma\" <[email protected]> het volgende:\nOn Jun 27, 2012 2:29 PM, \"Willy-Bas Loos\" <[email protected]> wrote:\n> Should i use a larger shared_buffers for the other cluster(s) too, so that i bypass the inefficient OS file-cache?\nOnce the in-memory cluster has filled its shared buffers, the pages go cold for the OS cache and get replaced with pages of other clusters that are actually referenced.\nAnts Aasma",
"msg_date": "Wed, 27 Jun 2012 15:23:58 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] [performance] fast reads on a\n busy server"
}
] |
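For a more direct view of what the 25%-vs-80% shared_buffers discussion above is about, the pg_buffercache contrib module shows which relations currently occupy the buffer cache. A sketch, assuming the module is installed (CREATE EXTENSION pg_buffercache on 9.1+), PostgreSQL 9.0+ for pg_relation_filenode(), and the default 8 kB block size:

```sql
SELECT c.relname,
       count(*)                        AS buffers,
       pg_size_pretty(count(*) * 8192) AS buffered
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 20;
```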
[
{
"msg_contents": "Hello,\n\nI \"have\" this two tables:\n\n\nTable Cat:\nid|A|B\n--+-+-\n1|3|5\n2|5|8\n3|6|9\n\nTable Pr:\nid|Catid|\n--+-\n1|3\n2|2\n3|1\n\n\nI need replace \"Catid\" column for corresponding values A and B (Table \nCat) in Table Pr.\n\nVirtual table like this:\nTable Pr:\nid|Catid|A|B\n--+-+-+\n1|3|6|9\n2|2|5|8\n3|1|3|5\n\n\nSomething like this, but that works,...\n\nSELECT * FROM pr WHERE pr.a /*> 1 AND*/*//* \n<https://www.google.es/search?hl=es&sa=X&ei=ULbyT9uKGYSt0QWNy4CwCQ&ved=0CEUQvwUoAQ&q=between&spell=1> \npr.b < 10;\n\nWith subqueries is too slow:\nSELECT * FROM \"Pr\" AS p, (SELECT \"id\" AS cid FROM \"Cat\" WHERE \"lft\" > 1 \nAND \"rgt\" < 10) AS c WHERE p.\"Cat\"=c.\"cid\" AND (...)) ORDER BY \"Catid\" \nASC OFFSET 0 LIMIT 40\n\n\nAny suggestion?\n\n\n\n\n\n\n\n Hello,\n\n I \"have\" this two tables:\n\n\n Table Cat:\n id|A|B\n --+-+-\n 1|3|5\n 2|5|8\n 3|6|9\n\n Table Pr:\n id|Catid|\n --+-\n 1|3 \n 2|2 \n 3|1\n\n\n I need replace \"Catid\" column for corresponding values A and B (Table\n Cat) in Table Pr.\n\n Virtual table like this:\n Table Pr:\n id|Catid|A|B\n --+-+-+\n 1|3|6|9 \n 2|2|5|8 \n 3|1|3|5\n\n\n Something like this, but that works,...\n\n SELECT * FROM pr WHERE pr.a > 1 AND pr.b < 10;\n\n With subqueries is too slow:\n\nSELECT * FROM \"Pr\" AS p, (SELECT \"id\" AS cid FROM\n \"Cat\" WHERE \"lft\" > 1 AND \"rgt\" < 10) AS c WHERE\n p.\"Cat\"=c.\"cid\" AND (...)) ORDER BY \"Catid\" ASC OFFSET 0 LIMIT 40\n\n\n Any suggestion?",
"msg_date": "Tue, 03 Jul 2012 11:29:09 +0200",
"msg_from": "PV <[email protected]>",
"msg_from_op": true,
"msg_subject": "static virtual columns as result?"
}
] |
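The "virtual" A and B columns asked for above can normally be produced with a plain join rather than a subquery. A minimal sketch against the simplified Cat/Pr example from the message; the quoted, mixed-case identifiers are taken from that example and may need adjusting to the real schema:

```sql
SELECT p.id, p."Catid", c."A", c."B"
FROM "Pr"  AS p
JOIN "Cat" AS c ON c.id = p."Catid"
WHERE c."A" > 1
  AND c."B" < 10;
```

This is essentially the JOIN rewrite suggested in the follow-up thread below.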
[
{
"msg_contents": "PV wrote:\n \n> Any suggestion?\n \nYou provided too little information to suggest much beyond using JOIN\ninstead of a subquery. Something like:\n \nSELECT pr.id, pr.catid, cat.a, cat.b\n FROM pr join cat ON (cat.id = pr.catid)\n WHERE \"lft\" > 1 AND \"rgt\" < 10 AND (...)\n ORDER BY cat.id\n OFFSET 0 LIMIT 40;\n \nWe can provide more specific suggestions if you follow the advice\nhere:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \nAnd please format your queries to make them more readable --\nsomething like I did above.\n \n-Kevin\n",
"msg_date": "Tue, 03 Jul 2012 08:44:50 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: static virtual columns as result?"
},
{
"msg_contents": "El 03/07/12 15:44, Kevin Grittner escribi�:\n> You provided too little information to suggest much beyond using JOIN\n> instead of a subquery. Something like:\nI think that adding new columns to Product , lft and rgt with index \nshould be fast. But does not seem a good design.\n\n\nTables:\n#########################################\n#########################################\n-- Table: \"Category\"\nCREATE TABLE \"Category\"\n(\n id serial NOT NULL,\n...\n lft integer,\n rgt integer,\n...\n path ltree,\n description text NOT NULL,\n idxfti tsvector,\n...\nCONSTRAINT \"Category_pkey\" PRIMARY KEY (id ),\n)\nWITH (OIDS=FALSE);\nALTER TABLE \"Category\" OWNER TO root;\n\nCREATE INDEX \"Category_idxfti_idx\"\n ON \"Category\"\n USING gist (idxfti );\nCREATE INDEX \"Category_lftrgt_idx\"\n ON \"Category\"\n USING btree (lft , rgt );\n\n\nCREATE TRIGGER categorytsvectorupdate\n BEFORE INSERT OR UPDATE\n ON \"Category\"\n FOR EACH ROW\n EXECUTE PROCEDURE tsearch2('idxfti', 'description');\n\n####################################\n-- Table: \"Product\"\n\nCREATE TABLE \"Product\"\n(\n id serial NOT NULL,\n...\n description text NOT NULL,\n \"Category\" integer NOT NULL,\n...\n creationtime integer NOT NULL,\n...\n idxfti tsvector,\n...\n CONSTRAINT product_pkey PRIMARY KEY (id ),\n CONSTRAINT product_creationtime_check CHECK (creationtime >= 0),\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX \"Product_Category_idx\"\n ON \"Product\"\n USING btree\n (\"Category\" );\n\nCREATE INDEX \"Product_creationtime\"\n ON \"Product\"\n USING btree\n (creationtime );\n\nCREATE INDEX \"Product_idxfti_idx\"\n ON \"Product\"\n USING gist\n (idxfti );\n\nCREATE TRIGGER producttsvectorupdate\n BEFORE INSERT OR UPDATE\n ON \"Product\"\n FOR EACH ROW\n EXECUTE PROCEDURE tsearch2('idxfti','description');\n\n#################################\n#########################################\n\nQuery\n#########################################\n\nEXPLAIN ANALYZE\n SELECT * FROM \"Product\" AS p\n JOIN \"Category\"\n ON (\"Category\".id = p.\"Category\")\n WHERE \"lft\" BETWEEN 1 AND 792\n ORDER BY creationtime ASC\n OFFSET 0 LIMIT 40\n\n\n\"Limit (cost=2582.87..2582.97 rows=40 width=1688) (actual \ntime=4306.209..4306.328 rows=40 loops=1)\"\n\" -> Sort (cost=2582.87..2584.40 rows=615 width=1688) (actual \ntime=4306.205..4306.246 rows=40 loops=1)\"\n\" Sort Key: p.creationtime\"\n\" Sort Method: top-N heapsort Memory: 69kB\"\n\" -> Nested Loop (cost=31.21..2563.43 rows=615 width=1688) \n(actual time=0.256..3257.310 rows=122543 loops=1)\"\n\" -> Index Scan using \"Category_lftrgt_idx\" on \"Category\" \n(cost=0.00..12.29 rows=2 width=954) (actual time=0.102..18.598 rows=402 \nloops=1)\"\n\" Index Cond: ((lft >= 1) AND (lft <= 792))\"\n\" -> Bitmap Heap Scan on \"Product\" p (cost=31.21..1270.93 \nrows=371 width=734) (actual time=0.561..6.125 rows=305 loops=402)\"\n\" Recheck Cond: (\"Category\" = \"Category\".id)\"\n\" -> Bitmap Index Scan on \"Product_Category_idx\" \n(cost=0.00..31.12 rows=371 width=0) (actual time=0.350..0.350 rows=337 \nloops=402)\"\n\" Index Cond: (\"Category\" = \"Category\".id)\"\n\"Total runtime: 4306.706 ms\"\n\n\n#########################################\n\nEXPLAIN ANALYZE\n SELECT * FROM \"Product\" AS p\n WHERE (p.\"idxfti\" @@ to_tsquery('simple', \n'vpc'))\n ORDER BY creationtime ASC OFFSET 0 LIMIT 40\n\n\n\"Limit (cost=471.29..471.39 rows=40 width=734) (actual \ntime=262.854..262.971 rows=40 loops=1)\"\n\" -> Sort (cost=471.29..471.57 rows=113 width=734) (actual \ntime=262.850..262.890 
rows=40 loops=1)\"\n\" Sort Key: creationtime\"\n\" Sort Method: top-N heapsort Memory: 68kB\"\n\" -> Bitmap Heap Scan on \"Product\" p (cost=49.62..467.72 \nrows=113 width=734) (actual time=258.502..262.322 rows=130 loops=1)\"\n\" Recheck Cond: (idxfti @@ '''vpc'''::tsquery)\"\n\" -> Bitmap Index Scan on \"Product_idxfti_idx\" \n(cost=0.00..49.60 rows=113 width=0) (actual time=258.340..258.340 \nrows=178 loops=1)\"\n\" Index Cond: (idxfti @@ '''vpc'''::tsquery)\"\n\"Total runtime: 263.177 ms\"\n\n#########################################\n\nAnd here is a big problem:\n\n\nEXPLAIN ANALYZE\n SELECT * FROM \"Product\" AS p\n JOIN \"Category\"\n ON (\"Category\".id = p.\"Category\")\n WHERE \"lft\" BETWEEN 1 AND 792 AND \n(p.\"idxfti\" @@ to_tsquery('simple', 'vpc'))\n ORDER BY creationtime ASC\n OFFSET 0 LIMIT 40\n\n\n\n\"Limit (cost=180.09..180.09 rows=1 width=1688) (actual \ntime=26652.316..26652.424 rows=40 loops=1)\"\n\" -> Sort (cost=180.09..180.09 rows=1 width=1688) (actual \ntime=26652.312..26652.350 rows=40 loops=1)\"\n\" Sort Key: p.creationtime\"\n\" Sort Method: top-N heapsort Memory: 96kB\"\n\" -> Nested Loop (cost=85.27..180.08 rows=1 width=1688) (actual \ntime=12981.612..26651.594 rows=130 loops=1)\"\n\" -> Bitmap Heap Scan on \"Category\" (cost=4.27..10.03 \nrows=2 width=954) (actual time=0.215..1.580 rows=402 loops=1)\"\n\" Recheck Cond: ((lft >= 1) AND (lft <= 792))\"\n\" -> Bitmap Index Scan on \"Category_lftrgt_idx\" \n(cost=0.00..4.27 rows=2 width=0) (actual time=0.193..0.193 rows=402 \nloops=1)\"\n\" Index Cond: ((lft >= 1) AND (lft <= 792))\"\n\" -> Bitmap Heap Scan on \"Product\" p (cost=81.00..85.01 \nrows=1 width=734) (actual time=66.276..66.280 rows=0 loops=402)\"\n\" Recheck Cond: ((\"Category\" = \"Category\".id) AND \n(idxfti @@ '''vpc'''::tsquery))\"\n\" -> BitmapAnd (cost=81.00..81.00 rows=1 width=0) \n(actual time=66.263..66.263 rows=0 loops=402)\"\n\" -> Bitmap Index Scan on \n\"Product_Category_idx\" (cost=0.00..31.12 rows=371 width=0) (actual \ntime=0.188..0.188 rows=337 loops=402)\"\n\" Index Cond: (\"Category\" = \"Category\".id)\"\n\" -> Bitmap Index Scan on \n\"Product_idxfti_idx\" (cost=0.00..49.60 rows=113 width=0) (actual \ntime=70.557..70.557 rows=178 loops=376)\"\n\" Index Cond: (idxfti @@ '''vpc'''::tsquery)\"\n\"Total runtime: 26652.772 ms\"\n\n#########################################\nSimilar query:\n\nEXPLAIN ANALYZE\n SELECT *FROM \"Product\" AS p,\n (SELECT \"id\" AS cid FROM \"Category\" WHERE \"lft\" BETWEEN 1 \nAND 792) AS c\n WHERE p.\"Category\"=c.\"cid\" AND (p.\"idxfti\" @@ \nto_tsquery('simple', 'vpc'))\n ORDER BY creationtime ASC\n OFFSET 0 LIMIT 40\n\"Limit (cost=180.09..180.09 rows=1 width=738) (actual \ntime=23530.598..23530.730 rows=40 loops=1)\"\n\" -> Sort (cost=180.09..180.09 rows=1 width=738) (actual \ntime=23530.593..23530.632 rows=40 loops=1)\"\n\" Sort Key: p.creationtime\"\n\" Sort Method: top-N heapsort Memory: 68kB\"\n\" -> Nested Loop (cost=85.27..180.08 rows=1 width=738) (actual \ntime=10523.533..23530.043 rows=130 loops=1)\"\n\" -> Bitmap Heap Scan on \"Category\" (cost=4.27..10.03 \nrows=2 width=4) (actual time=0.270..1.688 rows=402 loops=1)\"\n\" Recheck Cond: ((lft >= 1) AND (lft <= 792))\"\n\" -> Bitmap Index Scan on \"Category_lftrgt_idx\" \n(cost=0.00..4.27 rows=2 width=0) (actual time=0.246..0.246 rows=402 \nloops=1)\"\n\" Index Cond: ((lft >= 1) AND (lft <= 792))\"\n\" -> Bitmap Heap Scan on \"Product\" p (cost=81.00..85.01 \nrows=1 width=734) (actual time=58.512..58.516 rows=0 loops=402)\"\n\" Recheck Cond: 
((\"Category\" = \"Category\".id) AND \n(idxfti @@ '''vpc'''::tsquery))\"\n\" -> BitmapAnd (cost=81.00..81.00 rows=1 width=0) \n(actual time=58.503..58.503 rows=0 loops=402)\"\n\" -> Bitmap Index Scan on \n\"Product_Category_idx\" (cost=0.00..31.12 rows=371 width=0) (actual \ntime=0.213..0.213 rows=337 loops=402)\"\n\" Index Cond: (\"Category\" = \"Category\".id)\"\n\" -> Bitmap Index Scan on \n\"Product_idxfti_idx\" (cost=0.00..49.60 rows=113 width=0) (actual \ntime=62.246..62.246 rows=178 loops=376)\"\n\" Index Cond: (idxfti @@ '''vpc'''::tsquery)\"\n\"Total runtime: 23531.079 ms\"\n\n\n",
"msg_date": "Tue, 03 Jul 2012 17:43:54 +0200",
"msg_from": "PV <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: static virtual columns as result?"
},
{
"msg_contents": "SOLVED:\n\n1) Try remove old functions tsearch, ltree,..., \"old\" database format\n2) Vacuum or rebuild database\n\nOne could solve the problem.\n\nRegards\n",
"msg_date": "Wed, 04 Jul 2012 12:35:47 +0200",
"msg_from": "PV <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: static virtual columns as result?"
}
] |
[
{
"msg_contents": "Hello,\n\nMy question below is almost exact copy of the on on SO:\nhttp://stackoverflow.com/questions/11311079/postgresql-db-30-tables-with-number-of-rows-100-not-huge-the-fastest-way\n\nThe post on SO caused a few answers, all as one stating \"DO ONLY TRUNCATION\n- this is the fast\".\n\nAlso I think I've met some amount of misunderstanding of what exactly do I\nwant. I would appreciate it great, if you try, as people whom I may trust\nin performance question.\n\nHere goes the SO subject, formulating exact task I want to accomplish, this\nprocedure is intended to be run beetween after or before each test, ensure\ndatabase is cleaned enough and has reset unique identifiers column (User.id\nof the first User should be nor the number left from previous test in a\ntest suite but 1). Here goes the message:\n\n==== PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the\nfastest way to clean each non-empty table and reset unique identifier\ncolumn of empty ones ====\n\nI wonder, what is the fastest way to accomplish this kind of task in\nPostgreSQL. I am interested in the fastest solutions ever possible.\n\nI found myself such kind of solution for MySQL, it performs much faster\nthan just truncation of tables one by one. But anyway, I am interested in\nthe fastest solutions for MySQL too. See my result here, of course it it\nfor MySQL only: https://github.com/bmabey/database_cleaner/issues/126\n\nI have following assumptions:\n\n I have 30-100 tables. Let them be 30.\n\n Half of the tables are empty.\n\n Each non-empty table has, say, no more than 100 rows. By this I mean,\ntables are NOT large.\n\n I need an optional possibility to exclude 2 or 5 or N tables from this\nprocedure.\n\n I cannot! use transactions.\n\nI need the fastest cleaning strategy for such case working on PostgreSQL\nboth 8 and 9.\n\nI see the following approaches:\n\n1) Truncate each table. It is too slow, I think, especially for empty\ntables.\n\n2) Check each table for emptiness by more faster method, and then if it is\nempty reset its unique identifier column (analog of AUTO_INCREMENT in\nMySQL) to initial state (1), i.e to restore its last_value from sequence\n(the same AUTO_INCREMENT analog) back to 1, otherwise run truncate on it.\n\nI use Ruby code to iterate through all tables, calling code below on each\nof them, I tried to setup SQL code running against each table like:\n\nDO $$DECLARE r record;\nBEGIN\n somehow_captured = SELECT last_value from #{table}_id_seq\n IF (somehow_captured == 1) THEN\n == restore initial unique identifier column value here ==\n END\n\n IF (somehow_captured > 1) THEN\n TRUNCATE TABLE #{table};\n END IF;\nEND$$;\n\nManipulating this code in various aspects, I couldn't make it work, because\nof I am unfamiliar with PostgreSQL functions and blocks (and variables).\n\nAlso my guess was that EXISTS(SELECT something FROM TABLE) could somehow be\nused to work good as one of the \"check procedure\" units, cleaning procedure\nshould consist of, but haven't accomplished it too.\n\nI would appreciate any hints on how this procedure could be accomplished in\nPostgreSQL native way.\n\nThanks!\n\nUPDATE:\n\nI need all this to run unit and integration tests for Ruby or Ruby on Rails\nprojects. Each test should have a clean DB before it runs, or to do a\ncleanup after itself (so called teardown). Transactions are very good, but\nthey become unusable when running tests against particular webdrivers, in\nmy case the switch to truncation strategy is needed. 
Once I updated that\nwith reference to RoR, please do not post here the answers about\n\"Obviously, you need DatabaseCleaner for PG\" and so on and so on.\n\n==== post ends ====\n\nThanks,\n\nStanislaw.\n\nHello, My question below is almost exact copy of the on on SO: http://stackoverflow.com/questions/11311079/postgresql-db-30-tables-with-number-of-rows-100-not-huge-the-fastest-way\nThe post on SO caused a few answers, all as one stating \"DO ONLY TRUNCATION - this is the fast\".Also I think I've met some amount of misunderstanding of what exactly do I want. I would appreciate it great, if you try, as people whom I may trust in performance question.\nHere goes the SO subject, formulating exact task I want to accomplish, this procedure is intended to be run beetween after or before each test, ensure database is cleaned enough and has reset unique identifiers column (User.id of the first User should be nor the number left from previous test in a test suite but 1). Here goes the message:\n==== PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones ====I wonder, what is the fastest way to accomplish this kind of task in PostgreSQL. I am interested in the fastest solutions ever possible.\nI found myself such kind of solution for MySQL, it performs much faster than just truncation of tables one by one. But anyway, I am interested in the fastest solutions for MySQL too. See my result here, of course it it for MySQL only: https://github.com/bmabey/database_cleaner/issues/126\nI have following assumptions: I have 30-100 tables. Let them be 30. Half of the tables are empty. Each non-empty table has, say, no more than 100 rows. By this I mean, tables are NOT large.\n I need an optional possibility to exclude 2 or 5 or N tables from this procedure. I cannot! use transactions.I need the fastest cleaning strategy for such case working on PostgreSQL both 8 and 9.\nI see the following approaches:1) Truncate each table. It is too slow, I think, especially for empty tables.2) Check each table for emptiness by more faster method, and then if it is empty reset its unique identifier column (analog of AUTO_INCREMENT in MySQL) to initial state (1), i.e to restore its last_value from sequence (the same AUTO_INCREMENT analog) back to 1, otherwise run truncate on it. \nI use Ruby code to iterate through all tables, calling code below on each of them, I tried to setup SQL code running against each table like:DO $$DECLARE r record;BEGIN somehow_captured = SELECT last_value from #{table}_id_seq\n IF (somehow_captured == 1) THEN == restore initial unique identifier column value here == END IF (somehow_captured > 1) THEN TRUNCATE TABLE #{table}; END IF;END$$;Manipulating this code in various aspects, I couldn't make it work, because of I am unfamiliar with PostgreSQL functions and blocks (and variables).\nAlso my guess was that EXISTS(SELECT something FROM TABLE) could somehow be used to work good as one of the \"check procedure\" units, cleaning procedure should consist of, but haven't accomplished it too.\nI would appreciate any hints on how this procedure could be accomplished in PostgreSQL native way.Thanks!UPDATE:I need all this to run unit and integration tests for Ruby or Ruby on Rails projects. Each test should have a clean DB before it runs, or to do a cleanup after itself (so called teardown). 
Transactions are very good, but they become unusable when running tests against particular webdrivers, in my case the switch to truncation strategy is needed. Once I updated that with reference to RoR, please do not post here the answers about \"Obviously, you need DatabaseCleaner for PG\" and so on and so on.\n==== post ends ====Thanks,Stanislaw.",
"msg_date": "Tue, 3 Jul 2012 18:22:43 +0300",
"msg_from": "Stanislaw Pankevich <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the\n\tfastest way to clean each non-empty table and reset unique identifier\n\tcolumn of empty ones."
},
{
"msg_contents": "On 07/03/2012 11:22 PM, Stanislaw Pankevich wrote:\n> I cannot! use transactions.\nEverything in PostgreSQL uses transactions, they are not optional.\n\nI'm assuming you mean you can't use explicit transaction demarcation, ie \nBEGIN and COMMIT.\n>\n> need the fastest cleaning strategy for such case working on \n> PostgreSQL both 8 and 9.\nJust so you know, there isn't really any \"PostgreSQL 8\" or \"PostgreSQL \n9\". Major versions are x.y, eg 8.4, 9.0, 9.1 and 9.2 are all distinct \nmajor versions. This is different to most software and IMO pretty damn \nannoying, but that's how it is.\n\n>\n> 1) Truncate each table. It is too slow, I think, especially for empty \n> tables.\nReally?!? TRUNCATE should be extremely fast, especially on empty tables.\n\nYou're aware that you can TRUNCATE many tables in one run, right?\n\nTRUNCATE TABLE a, b, c, d, e, f, g;\n\n>\n> 2) Check each table for emptiness by more faster method, and then if \n> it is empty reset its unique identifier column (analog of \n> AUTO_INCREMENT in MySQL) to initial state (1), i.e to restore its \n> last_value from sequence (the same AUTO_INCREMENT analog) back to 1, \n> otherwise run truncate on it.\nYou can examine the value of SELECT last_value FROM the_sequence ; \nthat's the equivalent of the MySQL hack you're using. To set it, use \n'setval(...)'.\n\nhttp://www.postgresql.org/docs/9.1/static/functions-sequence.html\n\n> I use Ruby code to iterate through all tables\n\nIf you want to be fast, get rid of iteration. Do it all in one query or \na couple of simple queries. Minimize the number of round-trips and queries.\n\nI'll be truly stunned if the fastest way isn't to just TRUNCATE all the \ntarget tables in a single statement (not iteratively one by one with \nseparate TRUNCATEs).\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 07/03/2012 11:22 PM, Stanislaw\n Pankevich wrote:\n\n I cannot! use transactions.\n\n Everything in PostgreSQL uses transactions, they are not optional.\n\n I'm assuming you mean you can't use explicit transaction\n demarcation, ie BEGIN and COMMIT.\n\n need the fastest cleaning strategy for such case working on\n PostgreSQL both 8 and 9.\n\n Just so you know, there isn't really any \"PostgreSQL 8\" or\n \"PostgreSQL 9\". Major versions are x.y, eg 8.4, 9.0, 9.1 and 9.2 are\n all distinct major versions. This is different to most software and\n IMO pretty damn annoying, but that's how it is.\n\n\n\n 1) Truncate each table. It is too slow, I think, especially for\n empty tables.\n\n Really?!? TRUNCATE should be extremely fast, especially on empty\n tables.\n\n You're aware that you can TRUNCATE many tables in one run, right?\n\n TRUNCATE TABLE a, b, c, d, e, f, g;\n\n\n 2) Check each table for emptiness by more faster method, and then\n if it is empty reset its unique identifier column (analog of\n AUTO_INCREMENT in MySQL) to initial state (1), i.e to restore its\n last_value from sequence (the same AUTO_INCREMENT analog) back to\n 1, otherwise run truncate on it.\n\n You can examine the value of SELECT last_value FROM the_sequence ;\n that's the equivalent of the MySQL hack you're using. To set it, use\n 'setval(...)'. \n\n\nhttp://www.postgresql.org/docs/9.1/static/functions-sequence.html\n\n\n I use Ruby code to iterate through all tables\n\n If you want to be fast, get rid of iteration. Do it all in one query\n or a couple of simple queries. 
Minimize the number of round-trips\n and queries.\n\n I'll be truly stunned if the fastest way isn't to just TRUNCATE all\n the target tables in a single statement (not iteratively one by one\n with separate TRUNCATEs).\n\n --\n Craig Ringer",
"msg_date": "Fri, 06 Jul 2012 19:29:21 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100\n\t(not huge) - the fastest way to clean each non-empty table and reset\n\tunique identifier column of empty ones."
},
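Since the goal in this thread is both emptying the tables and resetting their sequences, it is worth noting that from 8.4 on a single multi-table TRUNCATE can do both at once. A sketch with placeholder table names:

```sql
-- Placeholder table names. RESTART IDENTITY (8.4+) resets sequences owned by
-- columns of the truncated tables; add CASCADE if other tables reference them.
TRUNCATE TABLE users, posts, comments RESTART IDENTITY;
```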
{
"msg_contents": "On 07/06/2012 07:29 PM, Craig Ringer wrote:\n> On 07/03/2012 11:22 PM, Stanislaw Pankevich wrote:\n>> I cannot! use transactions.\n> Everything in PostgreSQL uses transactions, they are not optional.\n>\n> I'm assuming you mean you can't use explicit transaction demarcation, \n> ie BEGIN and COMMIT.\n>>\n>> need the fastest cleaning strategy for such case working on \n>> PostgreSQL both 8 and 9.\n> Just so you know, there isn't really any \"PostgreSQL 8\" or \"PostgreSQL \n> 9\". Major versions are x.y, eg 8.4, 9.0, 9.1 and 9.2 are all distinct \n> major versions. This is different to most software and IMO pretty damn \n> annoying, but that's how it is.\n>\n>>\n>> 1) Truncate each table. It is too slow, I think, especially for empty \n>> tables.\n> Really?!? TRUNCATE should be extremely fast, especially on empty tables.\n>\n> You're aware that you can TRUNCATE many tables in one run, right?\n>\n> TRUNCATE TABLE a, b, c, d, e, f, g;\n>\n>>\n>> 2) Check each table for emptiness by more faster method, and then if \n>> it is empty reset its unique identifier column (analog of \n>> AUTO_INCREMENT in MySQL) to initial state (1), i.e to restore its \n>> last_value from sequence (the same AUTO_INCREMENT analog) back to 1, \n>> otherwise run truncate on it.\n> You can examine the value of SELECT last_value FROM the_sequence ; \n> that's the equivalent of the MySQL hack you're using. To set it, use \n> 'setval(...)'.\n>\n> http://www.postgresql.org/docs/9.1/static/functions-sequence.html\n>\n>> I use Ruby code to iterate through all tables\n>\n> If you want to be fast, get rid of iteration. Do it all in one query \n> or a couple of simple queries. Minimize the number of round-trips and \n> queries.\n>\n> I'll be truly stunned if the fastest way isn't to just TRUNCATE all \n> the target tables in a single statement (not iteratively one by one \n> with separate TRUNCATEs).\n\nOh, also, you can setval(...) a bunch of sequences at once:\n\nSELECT\n setval('first_seq', 0),\n setval('second_seq', 0),\n setval('third_seq', 0),\n setval('fouth_seq', 0);\n\n... etc. You should only need two statements, fast ones, to reset your \nDB to the default state.\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 07/06/2012 07:29 PM, Craig Ringer\n wrote:\n\n\n\nOn 07/03/2012 11:22 PM, Stanislaw Pankevich wrote:\n\n I cannot! use transactions.\n\n Everything in PostgreSQL uses transactions, they are not optional.\n\n I'm assuming you mean you can't use explicit transaction\n demarcation, ie BEGIN and COMMIT.\n\n need the fastest cleaning strategy for such case working on\n PostgreSQL both 8 and 9.\n\n Just so you know, there isn't really any \"PostgreSQL 8\" or\n \"PostgreSQL 9\". Major versions are x.y, eg 8.4, 9.0, 9.1 and 9.2\n are all distinct major versions. This is different to most\n software and IMO pretty damn annoying, but that's how it is.\n\n \n 1) Truncate each table. It is too slow, I think, especially for\n empty tables.\n\n Really?!? 
TRUNCATE should be extremely fast, especially on empty\n tables.\n\n You're aware that you can TRUNCATE many tables in one run, right?\n\n TRUNCATE TABLE a, b, c, d, e, f, g;\n\n\n 2) Check each table for emptiness by more faster method, and\n then if it is empty reset its unique identifier column (analog\n of AUTO_INCREMENT in MySQL) to initial state (1), i.e to restore\n its last_value from sequence (the same AUTO_INCREMENT analog)\n back to 1, otherwise run truncate on it.\n\n You can examine the value of SELECT last_value FROM the_sequence ;\n that's the equivalent of the MySQL hack you're using. To set it,\n use 'setval(...)'. \n\nhttp://www.postgresql.org/docs/9.1/static/functions-sequence.html\n\n I use Ruby code to iterate through all tables\n\n If you want to be fast, get rid of iteration. Do it all in one\n query or a couple of simple queries. Minimize the number of\n round-trips and queries.\n\n I'll be truly stunned if the fastest way isn't to just TRUNCATE\n all the target tables in a single statement (not iteratively one\n by one with separate TRUNCATEs).\n\n\n Oh, also, you can setval(...) a bunch of sequences at once:\n\n SELECT\n setval('first_seq', 0),\n setval('second_seq', 0),\n setval('third_seq', 0),\n setval('fouth_seq', 0);\n\n ... etc. You should only need two statements, fast ones, to reset\n your DB to the default state.\n\n --\n Craig Ringer",
"msg_date": "Fri, 06 Jul 2012 19:35:07 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100\n\t(not huge) - the fastest way to clean each non-empty table and reset\n\tunique identifier column of empty ones."
},
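Rather than listing every sequence by hand as in the setval() example above, the same reset can be driven from the catalogs. A sketch, assuming all sequences live in the public schema and that a "fresh" sequence should hand out 1 on its next nextval():

```sql
SELECT setval(c.oid::regclass, 1, false)   -- false: next nextval() returns 1
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'S'                      -- 'S' = sequence
  AND n.nspname = 'public';
```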
{
"msg_contents": "On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer <[email protected]> wrote:\n> 1) Truncate each table. It is too slow, I think, especially for empty\n> tables.\n>\n> Really?!? TRUNCATE should be extremely fast, especially on empty tables.\n>\n> You're aware that you can TRUNCATE many tables in one run, right?\n>\n> TRUNCATE TABLE a, b, c, d, e, f, g;\n\nI have seen in \"trivial\" cases -- in terms of data size -- where\nTRUNCATE is much slower than a full-table DELETE. The most common use\ncase for that is rapid setup/teardown of tests, where it can add up\nquite quickly and in a very big way. This is probably an artifact the\nspeed of one's file system to truncate and/or unlink everything.\n\nI haven't tried a multi-truncate though. Still, I don't know a\nmechanism besides slow file system truncation time that would explain\nwhy DELETE would be significantly faster.\n\n-- \nfdr\n",
"msg_date": "Fri, 6 Jul 2012 04:38:56 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100 (not\n\thuge) - the fastest way to clean each non-empty table and reset unique\n\tidentifier column of empty ones."
},
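The difference described above is easy to measure on a toy schema. A rough psql harness (the table name and row count are arbitrary), keeping in mind that the outcome depends heavily on the filesystem and fsync settings:

```sql
\timing
CREATE TABLE cleanup_demo (id serial PRIMARY KEY, v text);
INSERT INTO cleanup_demo (v) SELECT 'x' FROM generate_series(1, 100);
DELETE FROM cleanup_demo;                      -- time this...
INSERT INTO cleanup_demo (v) SELECT 'x' FROM generate_series(1, 100);
TRUNCATE cleanup_demo RESTART IDENTITY;        -- ...against this
DROP TABLE cleanup_demo;
```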
{
"msg_contents": "Thanks for the answer.\n\nPlease, see my answers below:\n\nOn Fri, Jul 6, 2012 at 2:35 PM, Craig Ringer <[email protected]> wrote:\n> On 07/06/2012 07:29 PM, Craig Ringer wrote:\n>\n> On 07/03/2012 11:22 PM, Stanislaw Pankevich wrote:\n>\n> I cannot! use transactions.\n>\n> Everything in PostgreSQL uses transactions, they are not optional.\n>\n> I'm assuming you mean you can't use explicit transaction demarcation, ie\n> BEGIN and COMMIT.\n\nYes, right!\n\n> need the fastest cleaning strategy for such case working on PostgreSQL both\n> 8 and 9.\n\n> Just so you know, there isn't really any \"PostgreSQL 8\" or \"PostgreSQL 9\".\n> Major versions are x.y, eg 8.4, 9.0, 9.1 and 9.2 are all distinct major\n> versions. This is different to most software and IMO pretty damn annoying,\n> but that's how it is.\n\nYes, right! I've meant \"queries as much universal across different\nversions as possible\" by saying this.\n\n>\n> 1) Truncate each table. It is too slow, I think, especially for empty\n> tables.\n>\n> Really?!? TRUNCATE should be extremely fast, especially on empty tables.\n>\n> You're aware that you can TRUNCATE many tables in one run, right?\n>\n> TRUNCATE TABLE a, b, c, d, e, f, g;\n\nYES, I know it ;) and I use this option!\n\n> 2) Check each table for emptiness by more faster method, and then if it is\n> empty reset its unique identifier column (analog of AUTO_INCREMENT in MySQL)\n> to initial state (1), i.e to restore its last_value from sequence (the same\n> AUTO_INCREMENT analog) back to 1, otherwise run truncate on it.\n>\n> You can examine the value of SELECT last_value FROM the_sequence ;\n\nI tried using last_value, but somehow, it was equal 1, for table with\n0 rows, and for table with 1 rows, and began to increment only after\nrows > 1! This seemed very strange to me, but I ensured it working\nthis way by many times running my test script. Because of this, I am\nusing SELECT currval.\n\n> that's\n> the equivalent of the MySQL hack you're using. To set it, use 'setval(...)'.\n>\n> http://www.postgresql.org/docs/9.1/static/functions-sequence.html\n>\n> I use Ruby code to iterate through all tables\n>\n>\n> If you want to be fast, get rid of iteration. Do it all in one query or a\n> couple of simple queries. Minimize the number of round-trips and queries.\n>\n> I'll be truly stunned if the fastest way isn't to just TRUNCATE all the\n> target tables in a single statement (not iteratively one by one with\n> separate TRUNCATEs).\n>\n>\n> Oh, also, you can setval(...) a bunch of sequences at once:\n>\n> SELECT\n> setval('first_seq', 0),\n> setval('second_seq', 0),\n> setval('third_seq', 0),\n> setval('fouth_seq', 0);\n> ... etc. You should only need two statements, fast ones, to reset your DB to\n> the default state.\n\nGood idea!\n\nCould please look at my latest results at\nhttps://github.com/stanislaw/truncate-vs-count? I think they are\nawesome for test oriented context.\n\nIn slower way, resetting ids I do SELECT currval('#{table}_id_seq');\nthen check whether it raises an error or > 0.\n\nIn a faster way, just checking for a number of rows, for each table I do:\nat_least_one_row = execute(<<-TR\n SELECT true FROM #{table} LIMIT 1;\nTR\n)\n\nIf there is at least one row, I add this table to the list of\ntables_to_truncate.\nFinally I run multiple truncate: TRUNCATE tables_to_truncate;\n\nThanks,\nStanislaw.\n",
"msg_date": "Fri, 6 Jul 2012 16:25:15 +0300",
"msg_from": "Stanislaw Pankevich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100 (not\n\thuge) - the fastest way to clean each non-empty table and reset unique\n\tidentifier column of empty ones."
},
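The puzzling last_value behaviour reported above (1 for both an unused and a once-used sequence) comes from the sequence's is_called flag: last_value only moves past 1 on the second nextval(). A small sketch with a throwaway sequence:

```sql
CREATE TEMP SEQUENCE demo_seq;
SELECT last_value, is_called FROM demo_seq;  -- 1, false: never used
SELECT nextval('demo_seq');                  -- returns 1
SELECT last_value, is_called FROM demo_seq;  -- 1, true: used once
SELECT nextval('demo_seq');                  -- returns 2
SELECT last_value, is_called FROM demo_seq;  -- 2, true
```

So reading is_called together with last_value distinguishes a reset sequence from a used one without relying on currval(), which raises an error unless nextval() has already been called in the same session.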
{
"msg_contents": "Interesting catch, I will try to test the behavior of 'DELETE vs\nmultiple TRUNCATE'.\n\nI'll post it here, If I discover any amazing results.\n\nOn Fri, Jul 6, 2012 at 2:38 PM, Daniel Farina <[email protected]> wrote:\n> On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer <[email protected]> wrote:\n>> 1) Truncate each table. It is too slow, I think, especially for empty\n>> tables.\n>>\n>> Really?!? TRUNCATE should be extremely fast, especially on empty tables.\n>>\n>> You're aware that you can TRUNCATE many tables in one run, right?\n>>\n>> TRUNCATE TABLE a, b, c, d, e, f, g;\n>\n> I have seen in \"trivial\" cases -- in terms of data size -- where\n> TRUNCATE is much slower than a full-table DELETE. The most common use\n> case for that is rapid setup/teardown of tests, where it can add up\n> quite quickly and in a very big way. This is probably an artifact the\n> speed of one's file system to truncate and/or unlink everything.\n>\n> I haven't tried a multi-truncate though. Still, I don't know a\n> mechanism besides slow file system truncation time that would explain\n> why DELETE would be significantly faster.\n>\n> --\n> fdr\n",
"msg_date": "Fri, 6 Jul 2012 16:30:52 +0300",
"msg_from": "Stanislaw Pankevich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100 (not\n\thuge) - the fastest way to clean each non-empty table and reset unique\n\tidentifier column of empty ones."
},
{
"msg_contents": "On 07/06/2012 07:38 PM, Daniel Farina wrote:\n> On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer <[email protected]> wrote:\n>> 1) Truncate each table. It is too slow, I think, especially for empty\n>> tables.\n>>\n>> Really?!? TRUNCATE should be extremely fast, especially on empty tables.\n>>\n>> You're aware that you can TRUNCATE many tables in one run, right?\n>>\n>> TRUNCATE TABLE a, b, c, d, e, f, g;\n> I have seen in \"trivial\" cases -- in terms of data size -- where\n> TRUNCATE is much slower than a full-table DELETE. The most common use\n> case for that is rapid setup/teardown of tests, where it can add up\n> quite quickly and in a very big way. This is probably an artifact the\n> speed of one's file system to truncate and/or unlink everything.\nThat makes some sense, actually. DELETEing from a table that has no \nforeign keys, triggers, etc while nothing else is accessing the table is \nfairly cheap and doesn't take much (any?) cleanup work afterwards. For \ntiny deletes I can easily see it being better than forcing the OS to \njournal a metadata change or two and a couple of fsync()s for a truncate.\n\nI suspect truncating many tables at once will prove a win over \niteratively DELETEing from many tables at once. I'd benchmark it except \nthat it's optimizing something I don't care about at all, and the \nresults would be massively dependent on the file system (ext3, ext4, \nxfs) and its journal configuration.\n\n--\nCraig Ringer\n",
"msg_date": "Fri, 06 Jul 2012 21:38:44 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100\n\t(not huge) - the fastest way to clean each non-empty table and reset\n\tunique identifier column of empty ones."
},
{
"msg_contents": "Stanislaw Pankevich wrote:\r\n> ==== PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each\r\n> non-empty table and reset unique identifier column of empty ones ====\r\n> \r\n> I wonder, what is the fastest way to accomplish this kind of task in PostgreSQL. I am interested in\r\n> the fastest solutions ever possible.\r\n\r\n> I have following assumptions:\r\n> \r\n> I have 30-100 tables. Let them be 30.\r\n> \r\n> Half of the tables are empty.\r\n> \r\n> Each non-empty table has, say, no more than 100 rows. By this I mean, tables are NOT large.\r\n> \r\n> I need an optional possibility to exclude 2 or 5 or N tables from this procedure.\r\n> \r\n> I cannot! use transactions.\r\n\r\nWhy? That would definitely speed up everything.\r\n\r\n> I need the fastest cleaning strategy for such case working on PostgreSQL both 8 and 9.\r\n> \r\n> I see the following approaches:\r\n> \r\n> 1) Truncate each table. It is too slow, I think, especially for empty tables.\r\n\r\nDid you actually try it? That's the king's way to performance questions!\r\nTruncating a single table is done in a matter of microseconds, particularly\r\nif it is not big.\r\nDo you have tens of thousands of tables?\r\n\r\n> 2) Check each table for emptiness by more faster method, and then if it is empty reset its unique\r\n> identifier column (analog of AUTO_INCREMENT in MySQL) to initial state (1), i.e to restore its\r\n> last_value from sequence (the same AUTO_INCREMENT analog) back to 1, otherwise run truncate on it.\r\n\r\nThat seems fragile an won't work everywhere.\r\n\r\nWhat if the table has no primary key with a DEFAULT that uses a sequence?\r\nWhat if it has such a key, but the DEFAULT was not used for an INSERT?\r\nWhat if somebody manually reset the sequence?\r\n\r\nBesides, how do you find out what the sequence for a table's primary key\r\nis? With a SELECT, I guess. That SELECT is probably not faster than\r\na simple TRUNCATE.\r\n\r\n> Also my guess was that EXISTS(SELECT something FROM TABLE) could somehow be used to work good as one\r\n> of the \"check procedure\" units, cleaning procedure should consist of, but haven't accomplished it too.\r\n\r\nYou could of course run a SELECT 1 FROM table LIMIT 1, but again I don't\r\nthink that this will be considerably faster than just truncating the table.\r\n\r\n> I would appreciate any hints on how this procedure could be accomplished in PostgreSQL native way.\r\n> \r\n> Thanks!\r\n> \r\n> UPDATE:\r\n> \r\n> I need all this to run unit and integration tests for Ruby or Ruby on Rails projects. Each test should\r\n> have a clean DB before it runs, or to do a cleanup after itself (so called teardown). Transactions are\r\n> very good, but they become unusable when running tests against particular webdrivers, in my case the\r\n> switch to truncation strategy is needed. Once I updated that with reference to RoR, please do not post\r\n> here the answers about \"Obviously, you need DatabaseCleaner for PG\" and so on and so on.\r\n\r\nI completely fail to understand what you talk about here.\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Fri, 6 Jul 2012 15:39:03 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL db,\n\t30 tables with number of rows < 100 (not huge) - the fastest way to\n\tclean each non-empty table and reset unique identifier column\n\tof empty ones."
},
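For the question raised above about finding which sequence feeds a table's key, there is a built-in helper that avoids guessing the name; 'users' and 'id' below are hypothetical:

```sql
-- Returns e.g. 'public.users_id_seq', or NULL if the column owns no sequence.
SELECT pg_get_serial_sequence('users', 'id');
```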
{
"msg_contents": "On Fri, Jul 6, 2012 at 4:38 PM, Craig Ringer <[email protected]> wrote:\n> On 07/06/2012 07:38 PM, Daniel Farina wrote:\n>>\n>> On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer <[email protected]>\n>> wrote:\n>>>\n>>> 1) Truncate each table. It is too slow, I think, especially for empty\n>>> tables.\n>>>\n>>> Really?!? TRUNCATE should be extremely fast, especially on empty tables.\n>>>\n>>> You're aware that you can TRUNCATE many tables in one run, right?\n>>>\n>>> TRUNCATE TABLE a, b, c, d, e, f, g;\n>>\n>> I have seen in \"trivial\" cases -- in terms of data size -- where\n>> TRUNCATE is much slower than a full-table DELETE. The most common use\n>> case for that is rapid setup/teardown of tests, where it can add up\n>> quite quickly and in a very big way. This is probably an artifact the\n>> speed of one's file system to truncate and/or unlink everything.\n>\n> That makes some sense, actually. DELETEing from a table that has no foreign\n> keys, triggers, etc while nothing else is accessing the table is fairly\n> cheap and doesn't take much (any?) cleanup work afterwards. For tiny deletes\n> I can easily see it being better than forcing the OS to journal a metadata\n> change or two and a couple of fsync()s for a truncate.\n>\n> I suspect truncating many tables at once will prove a win over iteratively\n> DELETEing from many tables at once. I'd benchmark it except that it's\n> optimizing something I don't care about at all, and the results would be\n> massively dependent on the file system (ext3, ext4, xfs) and its journal\n> configuration.\n\nQuestion:\nIs there a possibility in PostgreSQL to do DELETE on many tables\nmassively, like TRUNCATE allows. Like DELETE table1, table2, ...?\n",
"msg_date": "Fri, 6 Jul 2012 16:45:53 +0300",
"msg_from": "Stanislaw Pankevich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100 (not\n\thuge) - the fastest way to clean each non-empty table and reset unique\n\tidentifier column of empty ones."
},
{
"msg_contents": "On Fri, Jul 6, 2012 at 4:39 PM, Albe Laurenz <[email protected]> wrote:\n> Stanislaw Pankevich wrote:\n>> ==== PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each\n>> non-empty table and reset unique identifier column of empty ones ====\n>>\n>> I wonder, what is the fastest way to accomplish this kind of task in PostgreSQL. I am interested in\n>> the fastest solutions ever possible.\n>\n>> I have following assumptions:\n>>\n>> I have 30-100 tables. Let them be 30.\n>>\n>> Half of the tables are empty.\n>>\n>> Each non-empty table has, say, no more than 100 rows. By this I mean, tables are NOT large.\n>>\n>> I need an optional possibility to exclude 2 or 5 or N tables from this procedure.\n>>\n>> I cannot! use transactions.\n>\n> Why? That would definitely speed up everything.\nIt is because of specifics of Ruby or the Rails testing environment,\nwhen running tests again webdriver, which uses its own connection\nseparate from one, which test suite itself uses. Transactions are\ngreat, but not for all cases.\n\n>> I need the fastest cleaning strategy for such case working on PostgreSQL both 8 and 9.\n>>\n>> I see the following approaches:\n>>\n>> 1) Truncate each table. It is too slow, I think, especially for empty tables.\n>\n> Did you actually try it? That's the king's way to performance questions!\n> Truncating a single table is done in a matter of microseconds, particularly\n> if it is not big.\n> Do you have tens of thousands of tables?\n\nActually, 10-100 tables.\n\n>> 2) Check each table for emptiness by more faster method, and then if it is empty reset its unique\n>> identifier column (analog of AUTO_INCREMENT in MySQL) to initial state (1), i.e to restore its\n>> last_value from sequence (the same AUTO_INCREMENT analog) back to 1, otherwise run truncate on it.\n>\n> That seems fragile an won't work everywhere.\n>\n> What if the table has no primary key with a DEFAULT that uses a sequence?\n> What if it has such a key, but the DEFAULT was not used for an INSERT?\n> What if somebody manually reset the sequence?\n\nI'm using currval in my latest code.\n\n> Besides, how do you find out what the sequence for a table's primary key\n> is? With a SELECT, I guess. That SELECT is probably not faster than\n> a simple TRUNCATE.\n>\n>> Also my guess was that EXISTS(SELECT something FROM TABLE) could somehow be used to work good as one\n>> of the \"check procedure\" units, cleaning procedure should consist of, but haven't accomplished it too.\n>\n> You could of course run a SELECT 1 FROM table LIMIT 1, but again I don't\n> think that this will be considerably faster than just truncating the table.\n\nExactly this query is much faster, believe me. You can see my latest\nresults on https://github.com/stanislaw/truncate-vs-count.\n\n>> I need all this to run unit and integration tests for Ruby or Ruby on Rails projects. Each test should\n>> have a clean DB before it runs, or to do a cleanup after itself (so called teardown). Transactions are\n>> very good, but they become unusable when running tests against particular webdrivers, in my case the\n>> switch to truncation strategy is needed. Once I updated that with reference to RoR, please do not post\n>> here the answers about \"Obviously, you need DatabaseCleaner for PG\" and so on and so on.\n>\n> I completely fail to understand what you talk about here.\nYes, I know it is very Ruby and Ruby on Rails specific. 
But I tried to\nmake my question clear and abstract enough, to be understandable\nwithout the context it was originally drawn from.\n\nThanks.\n",
"msg_date": "Fri, 6 Jul 2012 16:51:51 +0300",
"msg_from": "Stanislaw Pankevich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100 (not\n\thuge) - the fastest way to clean each non-empty table and reset unique\n\tidentifier column of empty ones."
},
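A server-side sketch of the strategy being benchmarked in the linked repository (skip empty tables, then one multi-table TRUNCATE), assuming PostgreSQL 9.0+ for DO, that the tables live in the public schema, and that identities should be reset as well; the exclusion list is where tables to be skipped would go:

```sql
DO $$
DECLARE
  tbl      text;
  has_rows boolean;
  to_trunc text[] := '{}';
BEGIN
  FOR tbl IN
    SELECT quote_ident(n.nspname) || '.' || quote_ident(c.relname)
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
      AND n.nspname = 'public'
      AND c.relname NOT IN ('schema_migrations')   -- hypothetical exclusion list
  LOOP
    -- cheap emptiness check, as in the approach described above
    EXECUTE 'SELECT EXISTS (SELECT 1 FROM ' || tbl || ')' INTO has_rows;
    IF has_rows THEN
      to_trunc := to_trunc || tbl;
    END IF;
  END LOOP;

  IF to_trunc <> '{}' THEN
    EXECUTE 'TRUNCATE TABLE ' || array_to_string(to_trunc, ', ')
            || ' RESTART IDENTITY';
  END IF;
END
$$;
```

Note that RESTART IDENTITY only touches the sequences of the tables that actually get truncated; the sequences of the empty tables would still need the setval() treatment discussed earlier if they have been used.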
{
"msg_contents": "On Friday, July 06, 2012 01:38:56 PM Daniel Farina wrote:\n> ll, I don't know a\n> mechanism besides slow file system truncation time that would explain\n> why DELETE would be significantly faster.\nThere is no filesystem truncation happening. The heap and the indexes get \nmapped into a new file. Otherwise rollback would be pretty hard to implement.\n\nI guess the biggest cost in a bigger cluster is the dropping the buffers that \nwere formerly mapped to that relation (DropRelFileNodeBuffers).\n\nAndres\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n",
"msg_date": "Fri, 6 Jul 2012 16:14:26 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL db,\n\t30 tables with number of rows < 100 (not huge) - the fastest way to\n\tclean each non-empty table and reset unique identifier column\n\tof empty ones."
},
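The new-relfilenode behaviour described above can be observed directly; a sketch assuming 9.0+ for pg_relation_filenode() and a scratch table:

```sql
CREATE TABLE truncate_demo (id int);
SELECT pg_relation_filenode('truncate_demo');  -- some file node number
TRUNCATE truncate_demo;
SELECT pg_relation_filenode('truncate_demo');  -- a different number: a new file on disk
DROP TABLE truncate_demo;
```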
{
"msg_contents": "On 07/06/2012 09:45 PM, Stanislaw Pankevich wrote:\n\n> Question: Is there a possibility in PostgreSQL to do DELETE on many \n> tables massively, like TRUNCATE allows. Like DELETE table1, table2, ...? \n\nYes, you can do it with a writable common table expression, but you \nwanted version portability.\n\nWITH\n discard1 AS (DELETE FROM test1),\n discard2 AS (DELETE FROM test2 AS b)\nSELECT 1;\n\nNot only will this not work in older versions (IIRC it only works with \n9.1, maybe 9.0 too but I don't see it in the documentation for SELECT \nfor 9.0) but I find it hard to imagine any performance benefit over \nsimply sending\n\n DELETE FROM test1; DELETE FROM test2;\n\nThis all smells like premature optimisation of cases that don't matter. \nWhat problem are you solving with this?\n\n--\nCraig Ringer\n",
"msg_date": "Fri, 06 Jul 2012 22:22:21 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100\n\t(not huge) - the fastest way to clean each non-empty table and reset\n\tunique identifier column of empty ones."
},
{
"msg_contents": "Stanislaw Pankevich wrote:\r\n>>> ==== PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each\r\n>>> non-empty table and reset unique identifier column of empty ones ====\r\n>>>\r\n>>> I wonder, what is the fastest way to accomplish this kind of task in PostgreSQL. I am interested in\r\n>>> the fastest solutions ever possible.\r\n\r\n>>> I need the fastest cleaning strategy for such case working on PostgreSQL both 8 and 9.\r\n>>>\r\n>>> I see the following approaches:\r\n>>>\r\n>>> 1) Truncate each table. It is too slow, I think, especially for empty tables.\r\n\r\n>> Did you actually try it? That's the king's way to performance questions!\r\n>> Truncating a single table is done in a matter of microseconds, particularly\r\n>> if it is not big.\r\n>> Do you have tens of thousands of tables?\r\n\r\n> Actually, 10-100 tables.\r\n\r\n>> You could of course run a SELECT 1 FROM table LIMIT 1, but again I don't\r\n>> think that this will be considerably faster than just truncating the table.\r\n> \r\n> Exactly this query is much faster, believe me. You can see my latest\r\n> results on https://github.com/stanislaw/truncate-vs-count.\r\n\r\nOk, I believe you.\r\n\r\nMy quick tests showed that a sible truncate (including transaction and\r\nclient-server roundtrip via UNIX sockets takes some 10 to 30 milliseconds.\r\n\r\nMultiply that with 100, and you end up with just a few seconds at most.\r\nOr what did you measure?\r\n\r\nI guess you run that deletion very often so that it is painful.\r\n\r\nStill I think that the biggest performance gain is to be had by using\r\nPostgreSQL's features (truncate several tables in one statement, ...).\r\n\r\nTry to bend your Ruby framework!\r\n\r\nYours,\r\nLaurenz Albe\r\n",
"msg_date": "Fri, 6 Jul 2012 16:46:05 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL db,\n\t30 tables with number of rows < 100 (not huge) - the fastest way to\n\tclean each non-empty table and reset unique identifier column\n\tof empty ones."
},
{
"msg_contents": "Stanislaw Pankevich wrote:\n>>> ==== PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each\n>>> non-empty table and reset unique identifier column of empty ones ====\n\nHello, \n\n2 'exotic' ideas:\n\n- use dblink_send_query to do the job in multiple threads (I doubt this really could be faster)\n- have prepared empty tables in a separate schema, and a \"garbage schema\":\n\n ALTER TABLE x set schema garbage;\n ALTER TABLE prepared.x set schema \"current\";\n\nyou should be ready for the next test, \n\nbut still have to clean garbage nad moved to prepared for the next but one in the background....\n\nbest regards,\n\nMarc Mamin\n\n\n\n\n\n>>>\n>>> I wonder, what is the fastest way to accomplish this kind of task in PostgreSQL. I am interested in\n>>> the fastest solutions ever possible.\n\n>>> I need the fastest cleaning strategy for such case working on PostgreSQL both 8 and 9.\n>>>\n>>> I see the following approaches:\n>>>\n>>> 1) Truncate each table. It is too slow, I think, especially for empty tables.\n\n>> Did you actually try it? That's the king's way to performance questions!\n>> Truncating a single table is done in a matter of microseconds, particularly\n>> if it is not big.\n>> Do you have tens of thousands of tables?\n\n> Actually, 10-100 tables.\n\n>> You could of course run a SELECT 1 FROM table LIMIT 1, but again I don't\n>> think that this will be considerably faster than just truncating the table.\n> \n> Exactly this query is much faster, believe me. You can see my latest\n> results on https://github.com/stanislaw/truncate-vs-count.\n\nOk, I believe you.\n\nMy quick tests showed that a sible truncate (including transaction and\nclient-server roundtrip via UNIX sockets takes some 10 to 30 milliseconds.\n\nMultiply that with 100, and you end up with just a few seconds at most.\nOr what did you measure?\n\nI guess you run that deletion very often so that it is painful.\n\nStill I think that the biggest performance gain is to be had by using\nPostgreSQL's features (truncate several tables in one statement, ...).\n\nTry to bend your Ruby framework!\n\nYours,\nLaurenz Albe\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\n\nAW: [PERFORM] PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each non-empty table and reset unique identifier column of empty ones.\n\n\n\n\n\n\nStanislaw Pankevich wrote:\n>>> ==== PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the fastest way to clean each\n>>> non-empty table and reset unique identifier column of empty ones ====\n\nHello,\n\n2 'exotic' ideas:\n\n- use dblink_send_query to do the job in multiple threads (I doubt this really could be faster)\n- have prepared empty tables in a separate schema, and a \"garbage schema\":\n\n ALTER TABLE x set schema garbage;\n ALTER TABLE prepared.x set schema \"current\";\n\nyou should be ready for the next test,\n\nbut still have to clean garbage nad moved to prepared for the next but one in the background....\n\nbest regards,\n\nMarc Mamin\n\n\n\n\n\n>>>\n>>> I wonder, what is the fastest way to accomplish this kind of task in PostgreSQL. I am interested in\n>>> the fastest solutions ever possible.\n\n>>> I need the fastest cleaning strategy for such case working on PostgreSQL both 8 and 9.\n>>>\n>>> I see the following approaches:\n>>>\n>>> 1) Truncate each table. 
It is too slow, I think, especially for empty tables.\n\n>> Did you actually try it? That's the king's way to performance questions!\n>> Truncating a single table is done in a matter of microseconds, particularly\n>> if it is not big.\n>> Do you have tens of thousands of tables?\n\n> Actually, 10-100 tables.\n\n>> You could of course run a SELECT 1 FROM table LIMIT 1, but again I don't\n>> think that this will be considerably faster than just truncating the table.\n>\n> Exactly this query is much faster, believe me. You can see my latest\n> results on https://github.com/stanislaw/truncate-vs-count.\n\nOk, I believe you.\n\nMy quick tests showed that a sible truncate (including transaction and\nclient-server roundtrip via UNIX sockets takes some 10 to 30 milliseconds.\n\nMultiply that with 100, and you end up with just a few seconds at most.\nOr what did you measure?\n\nI guess you run that deletion very often so that it is painful.\n\nStill I think that the biggest performance gain is to be had by using\nPostgreSQL's features (truncate several tables in one statement, ...).\n\nTry to bend your Ruby framework!\n\nYours,\nLaurenz Albe\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
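Illustrative sketch (not from the original thread): the schema-swap idea above, written out for a single table x. The schema names prepared, garbage and "current" are placeholders, and the clean spare copy has to be rebuilt in the background before the next-but-one run; CREATE TABLE ... (LIKE ... INCLUDING ALL) needs PostgreSQL 9.0 or later, older releases have to list the INCLUDING options explicitly.

    BEGIN;
    ALTER TABLE "current".x SET SCHEMA garbage;      -- park the dirty copy
    ALTER TABLE prepared.x SET SCHEMA "current";     -- promote a clean, pre-built copy
    COMMIT;

    -- later, in the background, rebuild the spare copy for a future run:
    DROP TABLE garbage.x;
    CREATE TABLE prepared.x (LIKE "current".x INCLUDING ALL);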
"msg_date": "Fri, 6 Jul 2012 17:24:54 +0200",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL db,\n\t30 tables with number of rows < 100 (not huge) - the fastest way to\n\tclean each non-empty table and reset unique identifier column\n\tof empty ones."
},
{
"msg_contents": "On Fri, Jul 6, 2012 at 5:22 PM, Craig Ringer <[email protected]> wrote:\n> On 07/06/2012 09:45 PM, Stanislaw Pankevich wrote:\n>\n>> Question: Is there a possibility in PostgreSQL to do DELETE on many tables\n>> massively, like TRUNCATE allows. Like DELETE table1, table2, ...?\n>\n>\n> Yes, you can do it with a writable common table expression, but you wanted\n> version portability.\n>\n> WITH\n> discard1 AS (DELETE FROM test1),\n> discard2 AS (DELETE FROM test2 AS b)\n> SELECT 1;\n>\n> Not only will this not work in older versions (IIRC it only works with 9.1,\n> maybe 9.0 too but I don't see it in the documentation for SELECT for 9.0)\n> but I find it hard to imagine any performance benefit over simply sending\n>\n> DELETE FROM test1; DELETE FROM test2;\n>\n> This all smells like premature optimisation of cases that don't matter. What\n> problem are you solving with this?\n\nI will write tests for both massive TRUNCATE and DELETE (DELETE\neach_table) for my case with Ruby testing environment, and let you\nknow about the results. For now, I think, I should go for massive\nTRUNCATE.\n",
"msg_date": "Fri, 6 Jul 2012 18:27:03 +0300",
"msg_from": "Stanislaw Pankevich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100 (not\n\thuge) - the fastest way to clean each non-empty table and reset unique\n\tidentifier column of empty ones."
},
{
"msg_contents": "Marc, thanks for the answer.\n\nNa, these seem not to be enough universal and easy to hook into\nexisting truncation strategies used in Ruby world.\n\nOn Fri, Jul 6, 2012 at 6:24 PM, Marc Mamin <[email protected]> wrote:\n>\n>\n>\n> Stanislaw Pankevich wrote:\n>>>> ==== PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the\n>>>> fastest way to clean each\n>>>> non-empty table and reset unique identifier column of empty ones ====\n>\n> Hello,\n>\n> 2 'exotic' ideas:\n>\n> - use dblink_send_query to do the job in multiple threads (I doubt this\n> really could be faster)\n> - have prepared empty tables in a separate schema, and a \"garbage schema\":\n>\n> ALTER TABLE x set schema garbage;\n> ALTER TABLE prepared.x set schema \"current\";\n>\n> you should be ready for the next test,\n>\n> but still have to clean garbage nad moved to prepared for the next but one\n> in the background....\n>\n> best regards,\n>\n> Marc Mamin\n",
"msg_date": "Fri, 6 Jul 2012 18:44:34 +0300",
"msg_from": "Stanislaw Pankevich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100 (not\n\thuge) - the fastest way to clean each non-empty table and reset unique\n\tidentifier column of empty ones."
},
{
"msg_contents": "On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer <[email protected]> wrote:\n> On 07/03/2012 11:22 PM, Stanislaw Pankevich wrote:\n\n> > 1) Truncate each table. It is too slow, I think, especially for empty\n> > tables.\n>\n> Really?!? TRUNCATE should be extremely fast, especially on empty tables.\n>\n> You're aware that you can TRUNCATE many tables in one run, right?\n>\n> TRUNCATE TABLE a, b, c, d, e, f, g;\n\nThis still calls DropRelFileNodeAllBuffers once for each table (and\neach index), even if the table is empty.\n\nWith large shared_buffers, this can be relatively slow.\n\nCheers,\n\nJeff\n",
"msg_date": "Fri, 6 Jul 2012 08:57:05 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100 (not\n\thuge) - the fastest way to clean each non-empty table and reset unique\n\tidentifier column of empty ones."
},
{
"msg_contents": "On 07/03/2012 08:22 AM, Stanislaw Pankevich wrote:\n>\n> ==== PostgreSQL db, 30 tables with number of rows < 100 (not huge) - \n> the fastest way to clean each non-empty table and reset unique \n> identifier column of empty ones ====\n>\n> I wonder, what is the fastest way to accomplish this kind of task in \n> PostgreSQL. I am interested in the fastest solutions ever possible.\n>\nIt would help if we really understood your use-case. If you want to \nfully reset your database to a known starting state for test runs, why \nnot just have a base database initialized exactly as you wish, say \n\"test_base\", then just drop your test database and create the new \ndatabase from your template:\ndrop database test;\ncreate database test template test_base;\n\nThis should be very fast but it won't allow you to exclude individual \ntables.\n\nAre you interested in absolute fastest as a mind-game or is there a \nspecific use requirement, i.e. how fast is fast enough? This is the \nbasic starting point for tuning, hardware selection, etc.\n\nTruncate should be extremely fast but on tables that are as tiny as \nyours the difference may not be visible to an end-user. I just tried a \n\"delete from\" to empty a 10,000 record table and it took 14 milliseconds \nso you could do your maximum of 100 tables each containing 10-times your \nmax number of records in less than two seconds.\n\nRegardless of the method you choose, you need to be sure that nobody is \naccessing the database when you reset it. The drop/create database \nmethod will, of course, require and enforce that. Truncate requires an \nexclusive lock so it may appear to be very slow if it is waiting to get \nthat lock. And even if you don't have locking issues, your reluctance to \nwrap your reset code in transactions means that a client could be \nupdating some table or tables whenever the reset script isn't actively \nworking on that same table leading to unexplained weird test results.\n\nCheers,\nSteve\n\n",
"msg_date": "Fri, 06 Jul 2012 09:06:47 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100\n\t(not huge) - the fastest way to clean each non-empty table and reset\n\tunique identifier column of empty ones."
},
{
"msg_contents": "\nDaniel Farina-4 wrote\n> \n> On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer <[email protected]> wrote:\n>> 1) Truncate each table. It is too slow, I think, especially for empty\n>> tables.\n>>\n>> Really?!? TRUNCATE should be extremely fast, especially on empty tables.\n>>\n>> You're aware that you can TRUNCATE many tables in one run, right?\n>>\n>> TRUNCATE TABLE a, b, c, d, e, f, g;\n> \n> I have seen in \"trivial\" cases -- in terms of data size -- where\n> TRUNCATE is much slower than a full-table DELETE. The most common use\n> case for that is rapid setup/teardown of tests, where it can add up\n> quite quickly and in a very big way. This is probably an artifact the\n> speed of one's file system to truncate and/or unlink everything.\n> \n> I haven't tried a multi-truncate though. Still, I don't know a\n> mechanism besides slow file system truncation time that would explain\n> why DELETE would be significantly faster.\n> \n> -- \n> fdr\n> \n> -- \n> Sent via pgsql-performance mailing list (pgsql-performance@)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\nThat's my experience - I have a set of regression tests that clean the\ndatabase (deletes everything from a single parent table and lets the\nreferential integrity checks cascade to delete five other tables) at the end\nof each test run, and it can complete 90 tests (including 90 mass deletes)\nin a little over five seconds. If I replace that simple delete with a\ntruncation of all six tables at once, my test run balloons to 42 seconds.\n\nI run my development database with synchronous_commit = off, though, so I\nguess TRUNCATE has to hit the disk while the mass delete doesn't.\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/PostgreSQL-db-30-tables-with-number-of-rows-100-not-huge-the-fastest-way-to-clean-each-non-empty-tab-tp5715643p5715734.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Fri, 6 Jul 2012 11:32:25 -0700 (PDT)",
"msg_from": "Chris Hanks <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100 (not huge) -\n\tthe fastest way to clean each non-empty table and reset unique\n\tidentifier column of empty ones."
},
{
"msg_contents": "If someone is interested with the current strategy, I am using for\nthis, see this Ruby-based repo\nhttps://github.com/stanislaw/truncate-vs-count for both MySQL and\nPostgreSQL.\n\nMySQL: the fastest strategy for cleaning databases is truncation with\nfollowing modifications:\n1) We check is table is not empty and then truncate.\n2) If table is empty, we check if AUTO_INCREMENT was changed. If it\nwas, we do a truncate.\n\nFor MySQL just truncation is much faster than just deletion. The only\ncase where DELETE wins TRUNCATE is doing it on empty table.\nFor MySQL truncation with empty checks is much faster than just\nmultiple truncation.\nFor MySQL deletion with empty checks is much faster than just DELETE\non each tables.\n\nPostgreSQL: The fastest strategy for cleaning databases is deletion\nwith the same modifications.\n\nFor PostgreSQL just deletion is much faster than just TRUNCATION(even multiple).\nFor PostgreSQL multiple TRUNCATE doing empty checks before is slightly\nfaster than just multiple TRUNCATE\nFor PostgreSQL deletion with empty checks is slightly faster than just\nPostgreSQL deletion.\n\nThis is from where it began:\nhttps://github.com/bmabey/database_cleaner/issues/126\nThis is the result code and long discussion:\nhttps://github.com/bmabey/database_cleaner/issues/126\n\nWe began collecting users feedback proving my idea with first checking\nempty tables is right.\n\nThanks to all participants, especially those who've suggested trying\nDELETE as well as optimizing TRUNCATE.\n\nStanislaw\n\nOn Fri, Jul 6, 2012 at 7:06 PM, Steve Crawford\n<[email protected]> wrote:\n> On 07/03/2012 08:22 AM, Stanislaw Pankevich wrote:\n>>\n>>\n>> ==== PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the\n>> fastest way to clean each non-empty table and reset unique identifier column\n>> of empty ones ====\n>>\n>> I wonder, what is the fastest way to accomplish this kind of task in\n>> PostgreSQL. I am interested in the fastest solutions ever possible.\n>>\n> It would help if we really understood your use-case. If you want to fully\n> reset your database to a known starting state for test runs, why not just\n> have a base database initialized exactly as you wish, say \"test_base\", then\n> just drop your test database and create the new database from your template:\n> drop database test;\n> create database test template test_base;\n>\n> This should be very fast but it won't allow you to exclude individual\n> tables.\n>\n> Are you interested in absolute fastest as a mind-game or is there a specific\n> use requirement, i.e. how fast is fast enough? This is the basic starting\n> point for tuning, hardware selection, etc.\n>\n> Truncate should be extremely fast but on tables that are as tiny as yours\n> the difference may not be visible to an end-user. I just tried a \"delete\n> from\" to empty a 10,000 record table and it took 14 milliseconds so you\n> could do your maximum of 100 tables each containing 10-times your max number\n> of records in less than two seconds.\n>\n> Regardless of the method you choose, you need to be sure that nobody is\n> accessing the database when you reset it. The drop/create database method\n> will, of course, require and enforce that. Truncate requires an exclusive\n> lock so it may appear to be very slow if it is waiting to get that lock. 
And\n> even if you don't have locking issues, your reluctance to wrap your reset\n> code in transactions means that a client could be updating some table or\n> tables whenever the reset script isn't actively working on that same table\n> leading to unexplained weird test results.\n>\n> Cheers,\n> Steve\n>\n",
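Illustrative sketch (not from the original thread) of the check-first-then-clean strategy summarised above, done server-side in one round trip. The table list and the <table>_id_seq sequence naming are assumptions (the default for a serial id column); DO, FOREACH and format() need PostgreSQL 9.1 or later, so on 8.x the same loop has to be driven from the client.

    DO $$
    DECLARE
        t       text;
        has_row boolean;
    BEGIN
        -- placeholder table names; with foreign keys, list child tables before parents
        FOREACH t IN ARRAY ARRAY['users', 'orders', 'items'] LOOP
            EXECUTE format('SELECT EXISTS (SELECT 1 FROM %I)', t) INTO has_row;
            IF has_row THEN
                EXECUTE format('DELETE FROM %I', t);
            END IF;
            -- rewind the serial column either way so ids start from 1 again
            EXECUTE format('ALTER SEQUENCE %I RESTART WITH 1', t || '_id_seq');
        END LOOP;
    END$$;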
"msg_date": "Fri, 13 Jul 2012 10:50:53 +0300",
"msg_from": "Stanislaw Pankevich <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100 (not\n\thuge) - the fastest way to clean each non-empty table and reset unique\n\tidentifier column of empty ones."
},
{
"msg_contents": "On 07/13/2012 03:50 PM, Stanislaw Pankevich wrote:\n> MySQL: the fastest strategy for cleaning databases is truncation with\n> following modifications:\n> 1) We check is table is not empty and then truncate.\n> 2) If table is empty, we check if AUTO_INCREMENT was changed. If it\n> was, we do a truncate.\n>\n> For MySQL just truncation is much faster than just deletion.\nYou're talking about MySQL like it's only one database. Is this with \nMyISAM tables? InnoDB? Something else? I don't see any mention of table \nformats in a very quick skim of the discussion you linked to.\n\nPostgreSQL will /never/ be able to compete with MyISAM on raw speed of \nsmall, simple operations. There might things that can be made faster \nthan they are right now, but I really doubt it'll ever surpass MyISAM.\n\nMy mental analogy is asking an abseiler, who is busy clipping in and \ntesting their gear at the top of a bridge, why they aren't at the bottom \nof the canyon with the BASE jumper yet.\n\nThe BASE jumper will always get there faster, but the abseiler will \nalways get there alive.\n\nIf you're talking about InnoDB or another durable, reliable table \nstructure then I'd be interested in the mechanics of what MySQL's \ntruncates are doing.\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 07/13/2012 03:50 PM, Stanislaw\n Pankevich wrote:\n\n\nMySQL: the fastest strategy for cleaning databases is truncation with\nfollowing modifications:\n1) We check is table is not empty and then truncate.\n2) If table is empty, we check if AUTO_INCREMENT was changed. If it\nwas, we do a truncate.\n\nFor MySQL just truncation is much faster than just deletion. \n\n You're talking about MySQL like it's only one database. Is this with\n MyISAM tables? InnoDB? Something else? I don't see any mention of\n table formats in a very quick skim of the discussion you linked to.\n\n PostgreSQL will never be able to compete with MyISAM on raw\n speed of small, simple operations. There might things that can be\n made faster than they are right now, but I really doubt it'll ever\n surpass MyISAM.\n\n My mental analogy is asking an abseiler, who is busy clipping in and\n testing their gear at the top of a bridge, why they aren't at the\n bottom of the canyon with the BASE jumper yet.\n\n The BASE jumper will always get there faster, but the abseiler will\n always get there alive.\n\n If you're talking about InnoDB or another durable, reliable table\n structure then I'd be interested in the mechanics of what MySQL's\n truncates are doing.\n\n --\n Craig Ringer",
"msg_date": "Sat, 14 Jul 2012 11:35:27 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100\n\t(not huge) - the fastest way to clean each non-empty table and reset\n\tunique identifier column of empty ones."
},
{
"msg_contents": "On Tue, Jul 3, 2012 at 10:22 AM, Stanislaw Pankevich\n<[email protected]> wrote:\n> Hello,\n>\n> My question below is almost exact copy of the on on SO:\n> http://stackoverflow.com/questions/11311079/postgresql-db-30-tables-with-number-of-rows-100-not-huge-the-fastest-way\n>\n> The post on SO caused a few answers, all as one stating \"DO ONLY TRUNCATION\n> - this is the fast\".\n>\n> Also I think I've met some amount of misunderstanding of what exactly do I\n> want. I would appreciate it great, if you try, as people whom I may trust in\n> performance question.\n>\n> Here goes the SO subject, formulating exact task I want to accomplish, this\n> procedure is intended to be run beetween after or before each test, ensure\n> database is cleaned enough and has reset unique identifiers column (User.id\n> of the first User should be nor the number left from previous test in a test\n> suite but 1). Here goes the message:\n>\n> ==== PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the\n> fastest way to clean each non-empty table and reset unique identifier column\n> of empty ones ====\n>\n> I wonder, what is the fastest way to accomplish this kind of task in\n> PostgreSQL. I am interested in the fastest solutions ever possible.\n>\n> I found myself such kind of solution for MySQL, it performs much faster than\n> just truncation of tables one by one. But anyway, I am interested in the\n> fastest solutions for MySQL too. See my result here, of course it it for\n> MySQL only: https://github.com/bmabey/database_cleaner/issues/126\n>\n> I have following assumptions:\n>\n> I have 30-100 tables. Let them be 30.\n>\n> Half of the tables are empty.\n>\n> Each non-empty table has, say, no more than 100 rows. By this I mean,\n> tables are NOT large.\n>\n> I need an optional possibility to exclude 2 or 5 or N tables from this\n> procedure.\n>\n> I cannot! use transactions.\n>\n> I need the fastest cleaning strategy for such case working on PostgreSQL\n> both 8 and 9.\n>\n> I see the following approaches:\n>\n> 1) Truncate each table. It is too slow, I think, especially for empty\n> tables.\n>\n> 2) Check each table for emptiness by more faster method, and then if it is\n> empty reset its unique identifier column (analog of AUTO_INCREMENT in MySQL)\n> to initial state (1), i.e to restore its last_value from sequence (the same\n> AUTO_INCREMENT analog) back to 1, otherwise run truncate on it.\n>\n> I use Ruby code to iterate through all tables, calling code below on each of\n> them, I tried to setup SQL code running against each table like:\n>\n> DO $$DECLARE r record;\n> BEGIN\n> somehow_captured = SELECT last_value from #{table}_id_seq\n> IF (somehow_captured == 1) THEN\n> == restore initial unique identifier column value here ==\n> END\n>\n> IF (somehow_captured > 1) THEN\n> TRUNCATE TABLE #{table};\n> END IF;\n> END$$;\n\nThis didn't work because you can't use variables for table names in\nnon-dynamic (that is, executed as a string) statements. You'd probably\nwant:\n\nEXECUTE 'TRUNCATE TABLE ' || #{table};\n\nAs to performance, TRUNCATE in postgres (just like mysql) has the nice\nproperty that the speed of truncation is mostly not dependent on table\nsize: truncating a table with 100 records is not very much faster than\ntruncating a table with millions of records. For very small tables,\nit might be faster to simply fire off a delete.\n\nmerlin\n",
"msg_date": "Wed, 18 Jul 2012 09:33:42 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL db, 30 tables with number of rows < 100 (not\n\thuge) - the fastest way to clean each non-empty table and reset unique\n\tidentifier column of empty ones."
}
] |
[
{
"msg_contents": "Hello PostgreSQL fans,\nI would like to introduce myself and the TPC-V benchmark to the PostgreSQL community. I would then like to ask the community to help us make the TPC-V reference benchmarking kit a success, and establish PostgreSQL as a common DBMS used in measuring the performance of enterprise servers.\n\nI am VMware's rep to the TPC, and chair the TPC's virtualization benchmark development subcommittee. For those of you who don't know the TPC, it is an industry standards consortium, and its benchmarks are the main performance tests for enterprise-class database servers. For external (marketing) use, these benchmarks are the gold standard of comparing different servers, processors, databases, etc. For internal use, they are typically the biggest hammers an organization can use for performance stress testing of their products. TPC benchmarks are one of the workloads (if not the main workload) that processor vendors use to design their products. So the benchmarks are in much heavier use internal to companies than there are official disclosures.\n\nTPC-V is a new benchmark under development for virtualized databases. A TPC-V configuration has:\n- multiple virtual machines running a mix of DSS, OLTP, and business logic apps\n- VMs running with throughputs ranging from 10% to 40% of the total system\n- load elasticity emulating cloud characteristic: The benchmark maintains a constant overall tpsV load level, but the proportion directed to each VM changes every 10 minutes\n\nA paper in the TPC Technical Conference track of VLDB 2010 described the initial motivation and architecture of TPC-V. A paper that has been accepted to the TPC TC track of VLDB 2012 describes in detail the current status of the benchmark.\n\nAll TPC results up to now have been on commercial databases. The majority of active results are on Oracle or Microsoft SQL Server, followed by DB2, Sybase, and other players. Again, keep in mind that these benchmarks aren't meant to only compare DBMS products. In fact the majority of results are \"sponsored\" by server hardware companies. The server hardware, processor, storage, OS, etc. all contribute to the performance. But you can't have a database server benchmark results without a good DBMS!\n\nAnd that's where PostgreSQL comes in. The TPC-V development subcommittee followed the usual path of TPC benchmarks by writing a functional specification, and looking to TPC members to develop benchmarking kits to implement the spec. TPC-V uses the schema and transactions of TPC-E, but the transaction mixes and the way the benchmark is run it totally new and virtualization-specific. We chose to start from TPC-E to accelerate the benchmark development phase: the specification would be easier to write, and DBMS vendors could create TPC-V kits starting from their existing TPC-E kits. Until now, benchmarking kits for various TPC benchmarks have been typically developed by DBMS vendors, and offered to their partners for internal testing or disclosures. So our expectation was that one or more DBMS companies that owned existing TPC-E benchmarking kits would allocate resources to modify their kits to execute the TPC-V transactions, and supply kits to subcommittee members for prototyping. This did not happen (let's not get into the internal politics of the TPC!!), so the subcommittee moved forward with developing its own reference kit. 
The reference kit has been developed to run on PostgreSQL, and we are focusing our development efforts and testing on PostgreSQL.\n\nThe reference kit will be a first for the TPC, which until now has only published paper functional specifications. This kit will be publicly available to anyone who wants to run TPC-V, whether for internal testing, academic studies, or official publications. Commercial DBMS vendors are allowed to develop their own kits and publish with them. Even if commercial DBMS vendors decide later on to develop TPC-V kits, we expect official TPC-V publications with this reference kit using PostgreSQL, and of course a lot of academic use of the kit. I think this will be a boost for the PostgreSQL community (correct me if I am wrong!!).\n\nThe most frequent question to the TPC is \"do you offer a kit to run one of your benchmarks?\". There will finally be such a kit, and it will run on PGSQL.\n\nBut TPC benchmarks are where the big boys play. If we want the reference kit to be credible, it has to have good performance. We don't expect it to beat the commercial databases, but it has to be in the ballpark. We have started our work running the kit in a simple, single-VM, TPC-E type configuration since TPC-E is a known animal with official publications available. We have compared our performance to Microsoft SQL results published on a similar platform. After waving our hands through a number of small differences between the platforms, we have calculated a CPU cost of around 3.2ms/transaction for the published MS SQL results, versus a measurement of 8.6ms/transaction for PostgreSQL. (TPC benchmarks are typically pushed to full CPU utilization. One removes all bottlenecks in storage, networking, etc., to achieve the 100% CPU usage. So CPU cost/tran is the final decider of performance.) So we need to cut the CPU cost of transactions in half to make publications with PostgreSQL comparable to commercial databases. It is OK to be slower than MS SQL or Oracle. The benchmark running PostgreSQL can still be used to compare the performance of servers, processors, and especially, hypervisors under a demanding database workload. But the slower we are, the less credible we are.\n\nSorry for the long post. I will follow up with specific questions next.\n\nThanks,\nReza Taheri",
"msg_date": "Tue, 3 Jul 2012 16:08:21 -0700",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Introducing the TPC-V benchmark, and its relationship to PostgreSQL"
},
{
"msg_contents": "On 07/04/2012 07:08 AM, Reza Taheri wrote:\n\n> ... so the subcommittee moved forward with developing its own \n> reference kit. The reference kit has been developed to run on \n> PostgreSQL, and we are focusing our development efforts and testing on \n> PostgreSQL.\nThat's a very positive step. The TPC seems to me to have a pretty poor \nreputation among open source database users and vendors. I think that's \nlargely because the schema and tools are typically very closed and \nrestrictively licensed, though the prohibition against publishing \nbenchmarks by big commercial vendors doesn't help.\n\nThis sounds like a promising change. The TPC benchmarks are really good \nfor load-testing and regression testing, so having one that's directly \nPostgreSQL friendly will be a big plus, especially if it is \nappropriately licensed.\n\nThe opportunity to audit the schema, queries, and test setup before the \ntool is finalized would certainly be appealing. What can you publish in \ndraft form now?\n\nWhat license terms does the TPC plan to release the schema, queries, and \ndata for TPC-V under?\n\nI've cc'd Greg Smith and Dave Page, both of whom I suspect will be \ninterested in this development but could easily miss your message. If \nyou haven't read Greg' book \"PostgreSQL High Performance\" it's probably \na good idea to do so.\n\n--\nCraig Ringer\n\n",
"msg_date": "Wed, 04 Jul 2012 13:19:14 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Introducing the TPC-V benchmark, and its relationship\n\tto PostgreSQL"
},
{
"msg_contents": "Thanks for reply, Craig. As far as publishing a draft, we are planning to do something along those lines.\n\nFor the schema and the queries, we are pretty much taking those wholesale from TPC-E, whose specification is public (http://www.tpc.org/tpce/spec/v1.12.0/TPCE-v1.12.0.pdf). The high-level differences with TPC-E are detailed in the 2010 and 2012 TPC TC papers I mentioned. We will stick closely to the TPC-E schema and queries. Anything new means a long specification writing process, which we are trying to avoid. We want to get this benchmark out there quickly.\n\nI am not an expert in licensing. What I can tell you is that the kit will be available to anyone to download and use with a simple EULA based on existing TPC EULAs (although TPC hasn't had a complete end-to-end kit before, it has published partial code modules for its benchmarks). We broached the idea of open sourcing the kit, but it didn't pan out. The people on the subcommittee represent their companies, and different companies have different rules when their employees contribute to open source code. Satisfying the armies of lawyers would have been impossible. So the kit won't be open source, but readily available for use. It will probably be similar to the licensing for SPEC benchmarks if you are familiar with them.\n\nI'll pick up Greg's book. We had been focusing on functionality, but our focus will shift to performance soon. To be blunt, the team is very experienced in benchmarks and in database performance, but most of us are new to PGSQL.\n\nThanks,\nReza \n\n> -----Original Message-----\n> From: Craig Ringer [mailto:[email protected]]\n> Sent: Tuesday, July 03, 2012 10:19 PM\n> To: [email protected]\n> Cc: Reza Taheri; Andy Bond ([email protected]); Greg Kopczynski; Jignesh\n> Shah; Greg Smith; Dave Page\n> Subject: Re: [PERFORM] Introducing the TPC-V benchmark, and its\n> relationship to PostgreSQL\n> \n> On 07/04/2012 07:08 AM, Reza Taheri wrote:\n> \n> > ... so the subcommittee moved forward with developing its own\n> > reference kit. The reference kit has been developed to run on\n> > PostgreSQL, and we are focusing our development efforts and testing on\n> > PostgreSQL.\n> That's a very positive step. The TPC seems to me to have a pretty poor\n> reputation among open source database users and vendors. I think that's\n> largely because the schema and tools are typically very closed and\n> restrictively licensed, though the prohibition against publishing benchmarks\n> by big commercial vendors doesn't help.\n> \n> This sounds like a promising change. The TPC benchmarks are really good for\n> load-testing and regression testing, so having one that's directly PostgreSQL\n> friendly will be a big plus, especially if it is appropriately licensed.\n> \n> The opportunity to audit the schema, queries, and test setup before the\n> tool is finalized would certainly be appealing. What can you publish in draft\n> form now?\n> \n> What license terms does the TPC plan to release the schema, queries, and\n> data for TPC-V under?\n> \n> I've cc'd Greg Smith and Dave Page, both of whom I suspect will be\n> interested in this development but could easily miss your message. If you\n> haven't read Greg' book \"PostgreSQL High Performance\" it's probably a good\n> idea to do so.\n> \n> --\n> Craig Ringer\n\n",
"msg_date": "Wed, 4 Jul 2012 11:24:08 -0700",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Introducing the TPC-V benchmark, and its relationship\n\tto PostgreSQL"
},
{
"msg_contents": "On 05/07/12 06:24, Reza Taheri wrote:\n>\n> I'll pick up Greg's book. We had been focusing on functionality, but our focus will shift to performance soon. To be blunt, the team is very experienced in benchmarks and in database performance, but most of us are new to PGSQL.\n>\n>\n\nThe book is well worth a read - even if you are experienced (in fact you \nmay get more out of it in that case). In addition I think it would be \nbeneficial for you to share with us your non default tuning changes you \nmade in postgresql.conf etc (as others have requested) - one of the \nmajor benefits of an open source community is the input of many \nminds...however we need the basic information concerning setup etc to \neven begin to help.\n\nregards\n\nMark\n\n",
"msg_date": "Thu, 05 Jul 2012 10:35:22 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Introducing the TPC-V benchmark, and its relationship\n\tto PostgreSQL"
},
{
"msg_contents": "On 07/03/2012 07:08 PM, Reza Taheri wrote:\n> TPC-V is a new benchmark under development for virtualized databases. A\n> TPC-V configuration has:\n>\n> - multiple virtual machines running a mix of DSS, OLTP, and business\n> logic apps\n>\n> - VMs running with throughputs ranging from 10% to 40% of the total system\n> ..\n\nI think this would be a lot more interesting to the traditional, \ndedicated hardware part of the PostgreSQL community if there was a clear \nway to run this with only a single active machine too. If it's possible \nfor us to use this to compare instances of PostgreSQL on dedicated \nhardware, too, that is enormously more valuable to people with larger \ninstallations. It might be helpful to VMWare as well. Being able to \nsay \"this VM install gets X% of the performance of a bare-metal install\" \nanswers a question I get asked all the time--when people want to decide \nbetween dedicated and virtual setups.\n\nThe PostgreSQL community could use a benchmark like this for its own \nperformance regression testing too. A lot of that work is going to \nhappen on a dedicated machines.\n\n > After waving our hands through a number\n> of small differences between the platforms, we have calculated a CPU\n> cost of around 3.2ms/transaction for the published MS SQL results,\n> versus a measurement of 8.6ms/transaction for PostgreSQL. (TPC\n> benchmarks are typically pushed to full CPU utilization. One removes all\n> bottlenecks in storage, networking, etc., to achieve the 100% CPU usage.\n> So CPU cost/tran is the final decider of performance.) So we need to cut\n> the CPU cost of transactions in half to make publications with\n> PostgreSQL comparable to commercial databases.\n\nI appreciate that getting close to parity here is valuable. This \nsituation is so synthetic though--removing other bottlenecks and looking \nat CPU timing--that it's hard to get too excited about optimizing for \nit. There's a lot of things in PostgreSQL that we know are slower than \ncommercial databases because they're optimized for flexibility (the way \noperators are implemented is the best example) or for long-term code \nmaintenance. Microsoft doesn't care if they have terribly ugly code \nthat runs faster, because no one sees that code. PostgreSQL does care.\n\nThe measure that's more fair is a system cost based ones. What I've \nfound is that a fair number of people note PostgreSQL's low-level code \nisn't quite as fast as some of the less flexible alternatives--hard \ncoding operators is surely cheaper than looking them up each time--but \nthe license cost savings more than pays for bigger hardware to offset \nthat. I wish I had any customer whose database was CPU bound, that \nwould be an awesome world to live in.\n\nAnyway, guessing at causes here is premature speculation. When there's \nsome code for the test kit published, at that point discussing the \nparticulars of why it's not running well will get interesting.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n",
"msg_date": "Thu, 05 Jul 2012 21:25:07 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Introducing the TPC-V benchmark, and its relationship\n\tto PostgreSQL"
},
{
"msg_contents": "Hi Greg,\nYes, a single-instance benchmark is a natural fall-out from the TPC-V kit. Our coding team (4 people working directly on the benchmark with another 3-4 folks helping in various consulting capacities) is tasked with creating a multi-VM benchmark. The benchmark is still missing the critical Market Exchange Emulator function. Once that's done, it would be natural for someone else in the TPC to take our working prototype and simplify it for a single-system, TPC-E (not V) reference kit. The conversion is not technically difficult, but releasing kits is a new path for the TPC, and will take some work.\n\nCheers,\nReza\n\n> -----Original Message-----\n> From: Greg Smith [mailto:[email protected]]\n> Sent: Thursday, July 05, 2012 6:25 PM\n> To: Reza Taheri\n> Cc: [email protected]; Andy Bond ([email protected]);\n> Greg Kopczynski; Jignesh Shah\n> Subject: Re: [PERFORM] Introducing the TPC-V benchmark, and its\n> relationship to PostgreSQL\n> \n> On 07/03/2012 07:08 PM, Reza Taheri wrote:\n> > TPC-V is a new benchmark under development for virtualized databases.\n> > A TPC-V configuration has:\n> >\n> > - multiple virtual machines running a mix of DSS, OLTP, and business\n> > logic apps\n> >\n> > - VMs running with throughputs ranging from 10% to 40% of the total\n> > system ..\n> \n> I think this would be a lot more interesting to the traditional, dedicated\n> hardware part of the PostgreSQL community if there was a clear way to run\n> this with only a single active machine too. If it's possible for us to use this to\n> compare instances of PostgreSQL on dedicated hardware, too, that is\n> enormously more valuable to people with larger installations. It might be\n> helpful to VMWare as well. Being able to say \"this VM install gets X% of the\n> performance of a bare-metal install\"\n> answers a question I get asked all the time--when people want to decide\n> between dedicated and virtual setups.\n> \n> The PostgreSQL community could use a benchmark like this for its own\n> performance regression testing too. A lot of that work is going to happen on\n> a dedicated machines.\n> \n> > After waving our hands through a number\n> > of small differences between the platforms, we have calculated a CPU\n> > cost of around 3.2ms/transaction for the published MS SQL results,\n> > versus a measurement of 8.6ms/transaction for PostgreSQL. (TPC\n> > benchmarks are typically pushed to full CPU utilization. One removes\n> > all bottlenecks in storage, networking, etc., to achieve the 100% CPU\n> usage.\n> > So CPU cost/tran is the final decider of performance.) So we need to\n> > cut the CPU cost of transactions in half to make publications with\n> > PostgreSQL comparable to commercial databases.\n> \n> I appreciate that getting close to parity here is valuable. This situation is so\n> synthetic though--removing other bottlenecks and looking at CPU timing--\n> that it's hard to get too excited about optimizing for it. There's a lot of\n> things in PostgreSQL that we know are slower than commercial databases\n> because they're optimized for flexibility (the way operators are\n> implemented is the best example) or for long-term code maintenance.\n> Microsoft doesn't care if they have terribly ugly code that runs faster,\n> because no one sees that code. PostgreSQL does care.\n> \n> The measure that's more fair is a system cost based ones. 
What I've found is\n> that a fair number of people note PostgreSQL's low-level code isn't quite as\n> fast as some of the less flexible alternatives--hard coding operators is surely\n> cheaper than looking them up each time--but the license cost savings more\n> than pays for bigger hardware to offset that. I wish I had any customer\n> whose database was CPU bound, that would be an awesome world to live\n> in.\n> \n> Anyway, guessing at causes here is premature speculation. When there's\n> some code for the test kit published, at that point discussing the particulars\n> of why it's not running well will get interesting.\n> \n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n",
"msg_date": "Thu, 5 Jul 2012 20:22:36 -0700",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Introducing the TPC-V benchmark, and its relationship\n\tto PostgreSQL"
},
{
"msg_contents": "On 07/06/2012 11:22 AM, Reza Taheri wrote:\n> Hi Greg,\n> Yes, a single-instance benchmark is a natural fall-out from the TPC-V kit. Our coding team (4 people working directly on the benchmark with another 3-4 folks helping in various consulting capacities) is tasked with creating a multi-VM benchmark. The benchmark is still missing the critical Market Exchange Emulator function. Once that's done, it would be natural for someone else in the TPC to take our working prototype and simplify it for a single-system, TPC-E (not V) reference kit. The conversion is not technically difficult, but releasing kits is a new path for the TPC, and will take some work.\nPlease consider releasing sample versions early in the process, \nespecially as such releases are new to the TPC. Giving others the \nopportunity to contribute different skill sets and experiences before \neverything is locked in to the final configuration is important, \nespecially when trying new things.\n\n--\nCraig Ringer\n",
"msg_date": "Fri, 06 Jul 2012 11:41:02 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Introducing the TPC-V benchmark, and its relationship\n\tto PostgreSQL"
},
{
"msg_contents": "Yes, I hear you. TPC's usual mode of operation has been to release details after the benchmark is complete. But TPC does have a policy clause that allows publication of draft specifications to get public feedback before the benchmark is complete. Our 2012 TPC TC paper will have a lot of the high level details. We need to see if we can use the draft clause to also release beta versions of code.\n\nThanks,\nReza\n\n> -----Original Message-----\n> From: Craig Ringer [mailto:[email protected]]\n> Sent: Thursday, July 05, 2012 8:41 PM\n> To: Reza Taheri\n> Cc: Greg Smith; [email protected]\n> Subject: Re: [PERFORM] Introducing the TPC-V benchmark, and its\n> relationship to PostgreSQL\n> \n> On 07/06/2012 11:22 AM, Reza Taheri wrote:\n> > Hi Greg,\n> > Yes, a single-instance benchmark is a natural fall-out from the TPC-V kit.\n> Our coding team (4 people working directly on the benchmark with another\n> 3-4 folks helping in various consulting capacities) is tasked with creating a\n> multi-VM benchmark. The benchmark is still missing the critical Market\n> Exchange Emulator function. Once that's done, it would be natural for\n> someone else in the TPC to take our working prototype and simplify it for a\n> single-system, TPC-E (not V) reference kit. The conversion is not technically\n> difficult, but releasing kits is a new path for the TPC, and will take some\n> work.\n> Please consider releasing sample versions early in the process, especially as\n> such releases are new to the TPC. Giving others the opportunity to\n> contribute different skill sets and experiences before everything is locked\n> in to the final configuration is important, especially when trying new things.\n> \n> --\n> Craig Ringer\n",
"msg_date": "Thu, 5 Jul 2012 21:03:44 -0700",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Introducing the TPC-V benchmark, and its relationship\n\tto PostgreSQL"
}
] |
[
{
"msg_contents": "Following the earlier email introducing the TPC-V benchmark, and that we are developing an industry standard benchmarking kit for TPC-V using PostgreSQL, here is a specific performance issue we have run into.\n\nIn running a TPC-E prototype of the benchmark on an 8-core Nehalem blade and a disk array with 14 SSDs and 90 spinning drives, we noticed that we are doing a lot more I/O than the TPC-E benchmark is supposed to produce. Digging deeper, we noticed that the I/O rate (around 28K IOPS) was not unreasonable for our combination of SQL queries/table and index sizes/buffer pool size. What was unreasonable was the large size of the tables, and especially, of the indexes.\n\nTo put this in perspective, let us compare our situation to a published TPC-E result on MS SQL at http://bit.ly/QeWXhE. This was run on a similar server, and the database size is close to ours. Our table and index sizes should be 32.5% of the MS SQL size (for those who care, we populated the database with 300,000 customers and 125 Initial Trade Days; they built with 385,000 customers and 300 ITD). Look at page 34 of the disclosure for the table and index sizes, and focus on the large tables. For our large tables, this is what I am seeing:\n\n\n List of relations\nSchema | Name | Type | Owner | Size | Description\n--------+--------------------+-------+-------+------------+-------------\npublic | trade | table | tpce | 402 GB |\n public | cash_transaction | table | tpce | 309 GB |\n public | trade_history | table | tpce | 291 GB |\n public | settlement | table | tpce | 203 GB |\n public | holding_history | table | tpce | 183 GB |\n public | daily_market | table | tpce | 21 GB |\n public | holding | table | tpce | 15 GB |\n\n List of relations\nSchema | Name | Type | Owner | Table | Size | Description\n--------+---------+-------+-------+--------------------+---------+-------------\npublic | idx_th | index | tpce | trade_history | 186 GB |\n public | idx_hh2 | index | tpce | holding_history | 133 GB |\n public | idx_hh | index | tpce | holding_history | 126 GB |\n public | idx_t2 | index | tpce | trade | 119 GB |\n public | idx_t3 | index | tpce | trade | 110 GB |\n public | idx_se | index | tpce | settlement | 63 GB |\n public | idx_t | index | tpce | trade | 62 GB |\n public | idx_ct | index | tpce | cash_transaction | 55 GB |\n public | idx_h2 | index | tpce | holding | 12 GB |\n\nI don't know Dell's exact I/O rate, but judging by their storage configuration and what's expected of the benchmark, we are several times too high. (Even after cutting the database size by a factor of 10, we are around twice the IOPS rate we should be at.)\n\nComparing the table sizes, we are close to 2X larger (more on this in a later note). But the index size is what stands out. Our overall index usage (again, after accounting for different numbers of rows) is 4.8X times larger. 35% of our I/Os are to the index space. I am guessing that the 4.8X ballooning has something to do with this, and that in itself explains a lot about our high I/O rate, as well as higher CPU/tran cycles compared to MS SQL (we are 2.5-3 times slower).\n\nSo I looked more closely at the indexes. I chose the CASH_TRANSACTION table since it has a single index, and we can compare it more directly to the Dell data. If you look at page 34 of http://bit.ly/QeWXhE, the index size of CT is 1,278,720KB for 6,120,529,488 rows. That's less than one byte of index per data row! How could that be? 
Well, MS SQL used a \"clustered index\" for CT, i.e., the data is held in the leaf pages of the index B-Tree. The data and index are in one data structure. Once you lookup the index, you also have the data at zero additional cost. For PGSQL, we had to create a regular index, which took up 55GB. Once you do the math, this works out to around 30 bytes per row. I imagine we have the 15-byte key along with a couple of 4-byte or 8-byte pointers.\n\nSo MS SQL beats PGSQL by a) having a lower I/O rate due to no competition for the buffer pool from indexes (except for secondary indexes); and b) by getting the data with a free lookup, whereas we have to work our way down both the index and the data trees.\n\nDell created a clustered index for every single one of the 33 tables. Folks, past experiences with relational databases and TPC benchmarks tells me this could affect the bottom line performance of the benchmark by as much as 2X.\n\nChecking online, the subject of clustered indexes for PostgreSQL comes up often. PGSQL does have a concept called \"clustered table\", which means a table has been organized in the order of an index. This would help with sequential accesses to a table, but has nothing to do with this problem. PGSQL folks sometimes refer to what we want as \"integrated index\".\n\nIs the PGSQL community willing to invest in a feature that a) has been requested by many others already; and b) can make a huge difference in a benchmark that can lend substantial credibility to PGSQL performance?\n\nThanks,\nReza",
"msg_date": "Tue, 3 Jul 2012 16:13:15 -0700",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "The need for clustered indexes to boost TPC-V performance"
},
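For readers following along, the "clustered table" concept referred to above amounts to a one-time physical reorder rather than a maintained clustered index. A minimal sketch of what that looks like, using the cash_transaction table and the idx_ct index named in the size listing earlier in the thread (whether idx_ct is the right ordering key for this workload is an assumption):

```sql
-- Sketch of PostgreSQL's CLUSTER, for contrast with a true clustered index.
-- Table/index names are from the benchmark schema discussed in this thread;
-- whether idx_ct is the right ordering key is an assumption.
CLUSTER cash_transaction USING idx_ct;  -- one-time physical rewrite in index order
ANALYZE cash_transaction;               -- refresh statistics after the rewrite

-- The ordering is not maintained as new rows arrive, so it must be repeated:
CLUSTER cash_transaction;               -- re-clusters on the previously chosen index
```

This keeps heap pages correlated with the index order (helping range scans and the planner's correlation estimates), but unlike a clustered/index-organized table it does not remove the separate index structure or its lookup cost.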
{
"msg_contents": "On 07/04/2012 07:13 AM, Reza Taheri wrote:\n>\n> Following the earlier email introducing the TPC-V benchmark, and that \n> we are developing an industry standard benchmarking kit for TPC-V \n> using PostgreSQL, here is a specific performance issue we have run into.\n>\n\nWhich version of PostgreSQL are you using?\n\nHow has it been tuned beyond the defaults - autovacuum settings, \nshared_buffers, effective_cache_size, WAL settings, etc?\n\nHow much RAM is on the blade? What OS and version are on the blade?\n\n> Comparing the table sizes, we are close to 2X larger (more on this in \n> a later note). But the index size is what stands out. Our overall \n> index usage (again, after accounting for different numbers of rows) is \n> 4.8X times larger. 35% of our I/Os are to the index space. I am \n> guessing that the 4.8X ballooning has something to do with this, and \n> that in itself explains a lot about our high I/O rate, as well as \n> higher CPU/tran cycles compared to MS SQL (we are 2.5-3 times slower).\n>\nThis is making me wonder about bloat issues and whether proper vacuuming \nis being done. If the visibility map and free space map aren't \nmaintained by proper vaccum operation everything gets messy, fast.\n\n> Well, MS SQL used a \"clustered index\" for CT, i.e., the data is held \n> in the leaf pages of the index B-Tree. The data and index are in one \n> data structure. Once you lookup the index, you also have the data at \n> zero additional cost.\n>\n> [snip]\n>\n> Is the PGSQL community willing to invest in a feature that a) has been \n> requested by many others already; and b) can make a huge difference in \n> a benchmark that can lend substantial credibility to PGSQL performance?\n>\n\nwhile PostgreSQL doesn't support covering indexes or clustered indexes \nat this point, 9.2 has added support for index-only scans, which are a \nhalf-way point of sorts. See:\n\nhttp://rhaas.blogspot.com.au/2011/10/index-only-scans-weve-got-em.html\nhttp://rhaas.blogspot.com.au/2010/11/index-only-scans.html\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a2822fb9337a21f98ac4ce850bb4145acf47ca27\n\nIf at all possible please see how your test is affected by this \nPostgreSQL 9.2 enhancement. It should make a big difference, and if it \ndoesn't it's important to know why.\n\n(CC'd Robert Haas)\n\nI'm not sure what the best option for getting a 9.2 beta build for \nWindows is.\n\n\nAs for the \"invest\" side - that's really a matter for EnterpriseDB, \nCommand Prompt, Red Hat, and the other backers who're employing people \nto work on the DB. Consider asking on pgsql-hackers, too; if nothing \nelse you'll get a good explanation of the current state and progress \ntoward clustered indexes.\n\nSome links that may be useful to you are:\n\nhttp://wiki.postgresql.org/wiki/Todo\n Things that it'd be good to support/implement at some point. \nSurprisingly, covering/clustered indexes aren't on there or at least \naren't easily found. It's certainly a much-desired feature despite its \napparent absence from the TODO.\n\nhttp://wiki.postgresql.org/wiki/PostgreSQL_9.2_Development_Plan\nhttp://wiki.postgresql.org/wiki/PostgreSQL_9.2_Open_Items\n\n--\nCraig Ringer\n\n\n\n\n\n\n\nOn 07/04/2012 07:13 AM, Reza Taheri\n wrote:\n\n\n\n\nFollowing the earlier email introducing the TPC-V benchmark,\n and that we are developing an industry standard benchmarking\n kit for TPC-V using PostgreSQL, here is a specific performance\n issue we have run into. 
\n\n\n\n Which version of PostgreSQL are you using?\n\n How has it been tuned beyond the defaults - autovacuum settings,\n shared_buffers, effective_cache_size, WAL settings, etc?\n\n How much RAM is on the blade? What OS and version are on the blade?\n\n\n\nComparing the table sizes, we are close to 2X larger (more on\n this in a later note). But the index size is what stands out.\n Our overall index usage (again, after accounting for different\n numbers of rows) is 4.8X times larger. 35% of our I/Os are to\n the index space. I am guessing that the 4.8X ballooning has\n something to do with this, and that in itself explains a lot\n about our high I/O rate, as well as higher CPU/tran cycles\n compared to MS SQL (we are 2.5-3 times slower). \n\n\n This is making me wonder about bloat issues and whether proper\n vacuuming is being done. If the visibility map and free space map\n aren't maintained by proper vaccum operation everything gets messy,\n fast.\n\n\nWell, MS SQL used a “clustered index” for CT, i.e., the data\n is held in the leaf pages of the index B-Tree. The data and\n index are in one data structure. Once you lookup the index, you\n also have the data at zero additional cost.\n \n[snip]\n\n Is the PGSQL community willing to invest in a feature that\n a) has been requested by many others already; and b) can\n make a huge difference in a benchmark that can lend\n substantial credibility to PGSQL performance? \n\n\n\n\n while PostgreSQL doesn't support covering indexes or clustered\n indexes at this point, 9.2 has added support for index-only scans,\n which are a half-way point of sorts. See:\n\n \n \nhttp://rhaas.blogspot.com.au/2011/10/index-only-scans-weve-got-em.html\n \n \nhttp://rhaas.blogspot.com.au/2010/11/index-only-scans.html\n \n \nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a2822fb9337a21f98ac4ce850bb4145acf47ca27\n\n If at all possible please see how your test is affected by this\n PostgreSQL 9.2 enhancement. It should make a big difference, and if\n it doesn't it's important to know why. \n\n (CC'd Robert Haas)\n\n I'm not sure what the best option for getting a 9.2 beta build for\n Windows is.\n\n\n As for the \"invest\" side - that's really a matter for EnterpriseDB,\n Command Prompt, Red Hat, and the other backers who're employing\n people to work on the DB. Consider asking on pgsql-hackers, too; if\n nothing else you'll get a good explanation of the current state and\n progress toward clustered indexes.\n\n Some links that may be useful to you are:\n\n \n \nhttp://wiki.postgresql.org/wiki/Todo\n Things that it'd be good to support/implement at some point.\n Surprisingly, covering/clustered indexes aren't on there or at least\n aren't easily found. It's certainly a much-desired feature despite\n its apparent absence from the TODO.\n\n \n \nhttp://wiki.postgresql.org/wiki/PostgreSQL_9.2_Development_Plan\n \n \nhttp://wiki.postgresql.org/wiki/PostgreSQL_9.2_Open_Items\n\n --\n Craig Ringer",
"msg_date": "Wed, 04 Jul 2012 13:43:43 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
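A hedged sketch of how the 9.2 index-only-scan suggestion above could be verified once the testbed is upgraded; the key column ct_t_id is an assumption taken from the TPC-E schema, and the range predicate is arbitrary:

```sql
-- Runs on 9.2: check whether a lookup is served by an index-only scan.
VACUUM (ANALYZE) cash_transaction;   -- index-only scans rely on an up-to-date visibility map
EXPLAIN (ANALYZE, BUFFERS)
SELECT ct_t_id
FROM cash_transaction
WHERE ct_t_id BETWEEN 1000000 AND 1001000;
-- Look for "Index Only Scan using idx_ct" and a low "Heap Fetches" count;
-- a high count means the visibility map is stale and the heap is still visited.
```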
{
"msg_contents": "Craig Ringer, 04.07.2012 07:43:\n\n> I'm not sure what the best option for getting a 9.2 beta build for Windows is.\n\nDownload the ZIP from here:\n\nhttp://www.enterprisedb.com/products-services-training/pgbindownload\n\nUnzip, initdb, pg_ctl start\n\nRegards\nThomas\n\n\n\n",
"msg_date": "Wed, 04 Jul 2012 09:57:30 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
{
"msg_contents": "On 07/04/2012 03:57 PM, Thomas Kellerer wrote:\n> Craig Ringer, 04.07.2012 07:43:\n>\n>> I'm not sure what the best option for getting a 9.2 beta build for \n>> Windows is.\n>\n> Download the ZIP from here:\n>\n> http://www.enterprisedb.com/products-services-training/pgbindownload\n\nGah, I'm blind. I looked at that page twice and failed to see the \nentries for the beta. Sorry.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 04 Jul 2012 16:31:39 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
{
"msg_contents": "On Tue, Jul 3, 2012 at 10:43 PM, Craig Ringer <[email protected]> wrote:\n> On 07/04/2012 07:13 AM, Reza Taheri wrote:\n>\n> Following the earlier email introducing the TPC-V benchmark, and that we are\n> developing an industry standard benchmarking kit for TPC-V using PostgreSQL,\n> here is a specific performance issue we have run into.\n>\n>\n> Which version of PostgreSQL are you using?\n>\n> How has it been tuned beyond the defaults - autovacuum settings,\n> shared_buffers, effective_cache_size, WAL settings, etc?\n>\n> How much RAM is on the blade? What OS and version are on the blade?\n>\n>\n> Comparing the table sizes, we are close to 2X larger (more on this in a\n> later note). But the index size is what stands out. Our overall index usage\n> (again, after accounting for different numbers of rows) is 4.8X times\n> larger. 35% of our I/Os are to the index space. I am guessing that the 4.8X\n> ballooning has something to do with this, and that in itself explains a lot\n> about our high I/O rate, as well as higher CPU/tran cycles compared to MS\n> SQL (we are 2.5-3 times slower).\n>\n> This is making me wonder about bloat issues and whether proper vacuuming is\n> being done. If the visibility map and free space map aren't maintained by\n> proper vaccum operation everything gets messy, fast.\n>\n> Well, MS SQL used a “clustered index” for CT, i.e., the data is held in the\n> leaf pages of the index B-Tree. The data and index are in one data\n> structure. Once you lookup the index, you also have the data at zero\n> additional cost.\n>\n> [snip]\n>\n>\n>\n> Is the PGSQL community willing to invest in a feature that a) has been\n> requested by many others already; and b) can make a huge difference in a\n> benchmark that can lend substantial credibility to PGSQL performance?\n>\n>\n> while PostgreSQL doesn't support covering indexes or clustered indexes at\n> this point, 9.2 has added support for index-only scans, which are a half-way\n> point of sorts. See:\n>\n> http://rhaas.blogspot.com.au/2011/10/index-only-scans-weve-got-em.html\n> http://rhaas.blogspot.com.au/2010/11/index-only-scans.html\n>\n> http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a2822fb9337a21f98ac4ce850bb4145acf47ca27\n>\n> If at all possible please see how your test is affected by this PostgreSQL\n> 9.2 enhancement. It should make a big difference, and if it doesn't it's\n> important to know why.\n>\n> (CC'd Robert Haas)\n>\n> I'm not sure what the best option for getting a 9.2 beta build for Windows\n> is.\n>\n>\n> As for the \"invest\" side - that's really a matter for EnterpriseDB, Command\n> Prompt, Red Hat, and the other backers who're employing people to work on\n> the DB. Consider asking on pgsql-hackers, too; if nothing else you'll get a\n> good explanation of the current state and progress toward clustered indexes.\n>\n> Some links that may be useful to you are:\n>\n> http://wiki.postgresql.org/wiki/Todo\n> Things that it'd be good to support/implement at some point. Surprisingly,\n> covering/clustered indexes aren't on there or at least aren't easily found.\n> It's certainly a much-desired feature despite its apparent absence from the\n> TODO.\n\nI think there is, deservingly, a lot of hesitation to implement a\nstrictly ordered table construct. A similar feature that didn't quite\nget finished -- but maybe can be beaten into shape -- is the\ngrouped-index-tuple implementation:\n\nhttp://community.enterprisedb.com/git/\n\nIt is mentioned on the TODO page. 
It's under the category that is\nperhaps poorly syntactically overloaded in the word \"cluster\".\n\n-- \nfdr\n",
"msg_date": "Wed, 4 Jul 2012 06:39:38 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
{
"msg_contents": "On Tue, Jul 3, 2012 at 8:13 PM, Reza Taheri <[email protected]> wrote:\n> So I looked more closely at the indexes. I chose the CASH_TRANSACTION\n> table since it has a single index, and we can compare it more directly to the\n> Dell data. If you look at page 34 of http://bit.ly/QeWXhE, the index size of CT\n> is 1,278,720KB for 6,120,529,488 rows. That’s less than one byte of index\n> per data row! How could that be? Well, MS SQL used a “clustered index”\n> for CT, i.e., the data is held in the leaf pages of the index B-Tree.\n> The data and index are in one data structure. Once you lookup the index,\n> you also have the data at zero additional cost. For PGSQL, we had to create\n> a regular index, which took up 55GB. Once you do the math, this works out\n> to around 30 bytes per row. I imagine we have the 15-byte key along with a\n> couple of 4-byte or 8-byte pointers.\n...\n> So MS SQL beats PGSQL by a) having a lower I/O rate due to no competition\n> for the buffer pool from indexes (except for secondary indexes); and b) by\n> getting the data with a free lookup, whereas we have to work our way down\n> both the index and the data trees.\n\n15-byte key?\n\nWhat about not storing the keys, but a hash, for leaf nodes?\n\nAssuming it can be made to work for both \"range\" and \"equality\" scans,\nholding only hashes on leaf nodes would reduce index size, but how\nmuch?\n\nI think it's doable, and I could come up with a spec if it's worth it.\nIt would have to scan the heap for only two extra index pages (the\nextremes that cannot be ruled out) and hash collisions, which doesn't\nseem like a big loss versus the reduced index.\n",
"msg_date": "Wed, 4 Jul 2012 11:26:36 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
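As a rough way to put numbers on the bytes-of-index-per-row arithmetic being discussed, the catalogs can be queried directly; reltuples is only an estimate maintained by ANALYZE/VACUUM, so treat the result as approximate:

```sql
-- Approximate bytes of index per heap row, for each index on a given table.
SELECT c.relname  AS table_name,
       ci.relname AS index_name,
       pg_relation_size(i.indexrelid::regclass)::numeric
         / NULLIF(c.reltuples::numeric, 0) AS index_bytes_per_row
FROM pg_index i
JOIN pg_class c  ON c.oid  = i.indrelid
JOIN pg_class ci ON ci.oid = i.indexrelid
WHERE c.relname = 'cash_transaction';
```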
{
"msg_contents": "On Wed, Jul 4, 2012 at 1:13 AM, Reza Taheri <[email protected]> wrote:\n\n> Checking online, the subject of clustered indexes for PostgreSQL comes up\n> often. PGSQL does have a concept called “clustered table”, which means a\n> table has been organized in the order of an index. This would help with\n> sequential accesses to a table, but has nothing to do with this problem.\n> PGSQL folks sometimes refer to what we want as “integrated index”.\n\nI do understand this correctly that we are speaking about the concept\nwhich is known under the term \"index organized table\" (IOT) in Oracle\nland, correct?\n\nhttp://docs.oracle.com/cd/E11882_01/server.112/e25789/indexiot.htm#CBBJEBIH\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Thu, 5 Jul 2012 14:30:23 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
{
"msg_contents": "Hi Daniel,\nYes, it sounds like GIT will take us half the way there by getting rid of much of the index I/O if we cluster the tables. We can set the fillfactor parameter to keep tables sorted after updates. I am not sure what impact inserts will have since the primary key keeps growing with new inserts, so perhaps the table will maintain the cluster order and the benefits of GIT for new rows, too. GIT won't save CPU cycles the way a clustered/integrated index would, and actually adds to the CPU cost since the data page has to be searched for the desired tuple.\n\nThanks,\nReza\n\n> -----Original Message-----\n> From: Daniel Farina [mailto:[email protected]]\n> Sent: Wednesday, July 04, 2012 6:40 AM\n> To: Craig Ringer\n> Cc: Reza Taheri; [email protected]; Robert Haas\n> Subject: Re: [PERFORM] The need for clustered indexes to boost TPC-V\n> performance\n> \n> On Tue, Jul 3, 2012 at 10:43 PM, Craig Ringer <[email protected]> wrote:\n> > On 07/04/2012 07:13 AM, Reza Taheri wrote:\n> >\n> > Following the earlier email introducing the TPC-V benchmark, and that\n> > we are developing an industry standard benchmarking kit for TPC-V\n> > using PostgreSQL, here is a specific performance issue we have run into.\n> >\n> >\n> > Which version of PostgreSQL are you using?\n> >\n> > How has it been tuned beyond the defaults - autovacuum settings,\n> > shared_buffers, effective_cache_size, WAL settings, etc?\n> >\n> > How much RAM is on the blade? What OS and version are on the blade?\n> >\n> >\n> > Comparing the table sizes, we are close to 2X larger (more on this in\n> > a later note). But the index size is what stands out. Our overall\n> > index usage (again, after accounting for different numbers of rows) is\n> > 4.8X times larger. 35% of our I/Os are to the index space. I am\n> > guessing that the 4.8X ballooning has something to do with this, and\n> > that in itself explains a lot about our high I/O rate, as well as\n> > higher CPU/tran cycles compared to MS SQL (we are 2.5-3 times slower).\n> >\n> > This is making me wonder about bloat issues and whether proper\n> > vacuuming is being done. If the visibility map and free space map\n> > aren't maintained by proper vaccum operation everything gets messy,\n> fast.\n> >\n> > Well, MS SQL used a \"clustered index\" for CT, i.e., the data is held\n> > in the leaf pages of the index B-Tree. The data and index are in one\n> > data structure. Once you lookup the index, you also have the data at\n> > zero additional cost.\n> >\n> > [snip]\n> >\n> >\n> >\n> > Is the PGSQL community willing to invest in a feature that a) has been\n> > requested by many others already; and b) can make a huge difference in\n> > a benchmark that can lend substantial credibility to PGSQL performance?\n> >\n> >\n> > while PostgreSQL doesn't support covering indexes or clustered indexes\n> > at this point, 9.2 has added support for index-only scans, which are a\n> > half-way point of sorts. See:\n> >\n> > http://rhaas.blogspot.com.au/2011/10/index-only-scans-weve-got-\n> em.html\n> > http://rhaas.blogspot.com.au/2010/11/index-only-scans.html\n> >\n> > http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a2822fb9\n> > 337a21f98ac4ce850bb4145acf47ca27\n> >\n> > If at all possible please see how your test is affected by this\n> > PostgreSQL\n> > 9.2 enhancement. 
It should make a big difference, and if it doesn't\n> > it's important to know why.\n> >\n> > (CC'd Robert Haas)\n> >\n> > I'm not sure what the best option for getting a 9.2 beta build for\n> > Windows is.\n> >\n> >\n> > As for the \"invest\" side - that's really a matter for EnterpriseDB,\n> > Command Prompt, Red Hat, and the other backers who're employing\n> people\n> > to work on the DB. Consider asking on pgsql-hackers, too; if nothing\n> > else you'll get a good explanation of the current state and progress toward\n> clustered indexes.\n> >\n> > Some links that may be useful to you are:\n> >\n> > http://wiki.postgresql.org/wiki/Todo\n> > Things that it'd be good to support/implement at some point.\n> > Surprisingly, covering/clustered indexes aren't on there or at least aren't\n> easily found.\n> > It's certainly a much-desired feature despite its apparent absence\n> > from the TODO.\n> \n> I think there is, deservingly, a lot of hesitation to implement a strictly\n> ordered table construct. A similar feature that didn't quite get finished --\n> but maybe can be beaten into shape -- is the grouped-index-tuple\n> implementation:\n> \n> http://community.enterprisedb.com/git/\n> \n> It is mentioned on the TODO page. It's under the category that is perhaps\n> poorly syntactically overloaded in the world \"cluster\".\n> \n> --\n> fdr\n",
"msg_date": "Thu, 5 Jul 2012 12:10:04 -0700",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V\n performance"
},
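A sketch of the fillfactor idea mentioned in the reply above; the 90% figure is illustrative rather than a tuned value, and the table/index names are taken from the size listing earlier in the thread:

```sql
ALTER TABLE trade SET (fillfactor = 90);  -- leave ~10% free space in new heap pages
ALTER INDEX idx_t SET (fillfactor = 90);  -- likewise for newly built index pages

-- fillfactor only affects pages written from now on; existing pages are
-- repacked at the next rewrite, e.g.:
CLUSTER trade USING idx_t;
```

The reserved space lets updated row versions stay on their original page (HOT updates, provided no indexed column changes), which is what helps an ordering established by CLUSTER or GIT-style grouping decay more slowly.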
{
"msg_contents": "Hi Robert,\nYes, the same concept. Oracle's IOT feature is used often with TPC benchmarks.\n\nThanks,\nReza\n\n> -----Original Message-----\n> From: Robert Klemme [mailto:[email protected]]\n> Sent: Thursday, July 05, 2012 5:30 AM\n> To: Reza Taheri\n> Cc: [email protected]; Andy Bond ([email protected]);\n> Greg Kopczynski; Jignesh Shah\n> Subject: Re: [PERFORM] The need for clustered indexes to boost TPC-V\n> performance\n> \n> On Wed, Jul 4, 2012 at 1:13 AM, Reza Taheri <[email protected]> wrote:\n> \n> > Checking online, the subject of clustered indexes for PostgreSQL comes\n> > up often. PGSQL does have a concept called \"clustered table\", which\n> > means a table has been organized in the order of an index. This would\n> > help with sequential accesses to a table, but has nothing to do with this\n> problem.\n> > PGSQL folks sometimes refer to what we want as \"integrated index\".\n> \n> I do understand this correctly that we are speaking about the concept which\n> is known under the term \"index organized table\" (IOT) in Oracle land,\n> correct?\n> \n> http://docs.oracle.com/cd/E11882_01/server.112/e25789/indexiot.htm#CBB\n> JEBIH\n> \n> Kind regards\n> \n> robert\n> \n> --\n> remember.guy do |as, often| as.you_can - without end\n> http://blog.rubybestpractices.com/\n",
"msg_date": "Thu, 5 Jul 2012 12:13:09 -0700",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V\n performance"
},
{
"msg_contents": "On Thu, Jul 5, 2012 at 12:13 PM, Reza Taheri <[email protected]> wrote:\n\n> Hi Robert,\n> Yes, the same concept. Oracle's IOT feature is used often with TPC\n> benchmarks.\n>\n>\nReza, it would be very helpful if you were to provide the list with a lot\nmore information about your current software and hardware configuration\nbefore coming to the conclusion that the only possible way forward is with\na significant architectural change to the db engine itself. Not only is it\nnot at all clear that you are extracting maximum performance from your\ncurrent hardware and software, but I doubt anyone is particularly\ninterested in doing a bunch of development purely to game a benchmark.\n There has been significant discussion of the necessity and viability of\nthe feature you are requesting in the past, so you should probably start\nwhere those discussions left off rather than starting the discussion all\nover again from the beginning. Of course, if vmware were to sponsor\ndevelopment of the feature in question, it probably wouldn't require nearly\nas much buy-in from the wider community.\n\nGetting back to the current performance issues - I have little doubt that\nthe MS SQL benchmark was set up and run by people who were intimately\nfamiliar with MS SQL performance tuning. You stated in your earlier email\nthat your team doesn't have significant postgresql-specific experience, so\nit isn't necessarily surprising that your first attempt at tuning didn't\nget the results that you are looking for. You stated that you have 14 SSDs\nand 90 spinning drives, but you don't specify how they are combined and how\nthe database is laid out on top of them. There is no mention of how much\nmemory is available to the system. We don't know how you've configured\npostgresql's memory allocation or how your config weights the relative\ncosts of index lookups, sequential scans, etc. The guidelines for this\nmailing list include instructions for what information should be provided\nwhen asking about performance improvements.\nhttp://archives.postgresql.org/pgsql-performance/ Let's start by\nascertaining how your benchmark results can be improved without engaging in\na significant development effort on the db engine itself.\n\nOn Thu, Jul 5, 2012 at 12:13 PM, Reza Taheri <[email protected]> wrote:\nHi Robert,\nYes, the same concept. Oracle's IOT feature is used often with TPC benchmarks.\nReza, it would be very helpful if you were to provide the list with a lot more information about your current software and hardware configuration before coming to the conclusion that the only possible way forward is with a significant architectural change to the db engine itself. Not only is it not at all clear that you are extracting maximum performance from your current hardware and software, but I doubt anyone is particularly interested in doing a bunch of development purely to game a benchmark. There has been significant discussion of the necessity and viability of the feature you are requesting in the past, so you should probably start where those discussions left off rather than starting the discussion all over again from the beginning. Of course, if vmware were to sponsor development of the feature in question, it probably wouldn't require nearly as much buy-in from the wider community.\nGetting back to the current performance issues - I have little doubt that the MS SQL benchmark was set up and run by people who were intimately familiar with MS SQL performance tuning. 
You stated in your earlier email that your team doesn't have significant postgresql-specific experience, so it isn't necessarily surprising that your first attempt at tuning didn't get the results that you are looking for. You stated that you have 14 SSDs and 90 spinning drives, but you don't specify how they are combined and how the database is laid out on top of them. There is no mention of how much memory is available to the system. We don't know how you've configured postgresql's memory allocation or how your config weights the relative costs of index lookups, sequential scans, etc. The guidelines for this mailing list include instructions for what information should be provided when asking about performance improvements. http://archives.postgresql.org/pgsql-performance/ Let's start by ascertaining how your benchmark results can be improved without engaging in a significant development effort on the db engine itself.",
"msg_date": "Thu, 5 Jul 2012 12:46:13 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
{
"msg_contents": "Hi Samuel,\nThe SSDs were used as a cache for the spinning drives. Here is a 30-second iostat sample representative of the whole run:\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 24.87 0.00 12.54 62.39 0.00 0.20\n\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util\nsdd 0.00 137.37 3058.40 106.17 34691.60 974.13 22.54 15.75 4.98 0.32 100.00\nsde 0.00 136.07 3063.37 107.70 35267.07 975.07 22.86 15.58 4.92 0.32 100.00\nsdf 0.00 135.37 3064.23 109.53 35815.60 979.60 23.19 15.82 4.99 0.32 100.00\nsdg 0.00 136.97 3066.57 116.67 35196.53 1014.53 22.75 15.87 4.99 0.31 100.00\nsdi 0.00 2011.03 0.00 87.90 0.00 8395.73 191.03 0.13 1.45 1.42 12.51\nsdk 0.00 136.63 3066.83 107.53 35805.07 976.67 23.17 16.01 5.04 0.32 100.00\nsdm 0.00 138.50 3054.40 111.10 34674.27 998.40 22.54 15.52 4.91 0.32 100.00\nsdj 0.00 136.73 3058.70 118.20 35227.20 1019.73 22.82 15.81 4.98 0.31 100.00\nsdl 0.00 137.53 3044.97 109.33 34448.00 987.47 22.47 15.78 5.00 0.32 100.00\n\nThe data and index tablespaces were striped across the 8 LUNs, and saw an average 5ms response. We can beef up the storage to handle more I/Os so that our utilization doesn't stay below 40%, but that misses the point: we have an I/O rate twice the commercial database because they used clustered indexes.\n\nI provided more config details in an earlier email.\n\nAs for asking for development to game a benchmark, no one is asking for benchmark specials. The question of enhancements in response to benchmark needs is an age old question. We can get into that, but it's really a different discussion. Let me just expose the flip side of it: are we willing to watch people use other databases to run benchmarks but feel content that no features were developed specifically in response to benchmark results?\n\nI am trying to engage with the community. We can drown the mailing list with details. So I decided to open the discussion with the high level points, and we will give you all the details that you want as we move forward.\n\nThanks,\nReza\n\nFrom: Samuel Gendler [mailto:[email protected]]\nSent: Thursday, July 05, 2012 12:46 PM\nTo: Reza Taheri\nCc: Robert Klemme; [email protected]\nSubject: Re: [PERFORM] The need for clustered indexes to boost TPC-V performance\n\n\nOn Thu, Jul 5, 2012 at 12:13 PM, Reza Taheri <[email protected]<mailto:[email protected]>> wrote:\nHi Robert,\nYes, the same concept. Oracle's IOT feature is used often with TPC benchmarks.\n\nReza, it would be very helpful if you were to provide the list with a lot more information about your current software and hardware configuration before coming to the conclusion that the only possible way forward is with a significant architectural change to the db engine itself. Not only is it not at all clear that you are extracting maximum performance from your current hardware and software, but I doubt anyone is particularly interested in doing a bunch of development purely to game a benchmark. There has been significant discussion of the necessity and viability of the feature you are requesting in the past, so you should probably start where those discussions left off rather than starting the discussion all over again from the beginning. 
Of course, if vmware were to sponsor development of the feature in question, it probably wouldn't require nearly as much buy-in from the wider community.\n\nGetting back to the current performance issues - I have little doubt that the MS SQL benchmark was set up and run by people who were intimately familiar with MS SQL performance tuning. You stated in your earlier email that your team doesn't have significant postgresql-specific experience, so it isn't necessarily surprising that your first attempt at tuning didn't get the results that you are looking for. You stated that you have 14 SSDs and 90 spinning drives, but you don't specify how they are combined and how the database is laid out on top of them. There is no mention of how much memory is available to the system. We don't know how you've configured postgresql's memory allocation or how your config weights the relative costs of index lookups, sequential scans, etc. The guidelines for this mailing list include instructions for what information should be provided when asking about performance improvements. http://archives.postgresql.org/pgsql-performance/ Let's start by ascertaining how your benchmark results can be improved without engaging in a significant development effort on the db engine itself.",
"msg_date": "Thu, 5 Jul 2012 13:37:40 -0700",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V\n performance"
},
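The heap-versus-index split behind the "35% of our I/Os" figure can also be cross-checked from PostgreSQL's own counters; these count blocks requested from the kernel rather than physical disk reads, so they complement rather than replace the iostat data above:

```sql
SELECT sum(heap_blks_read) AS heap_blocks_read,
       sum(idx_blks_read)  AS index_blocks_read,
       round(100.0 * sum(idx_blks_read)
             / NULLIF(sum(heap_blks_read) + sum(idx_blks_read), 0), 1) AS index_share_pct
FROM pg_statio_user_tables;
```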
{
"msg_contents": "On Thu, Jul 5, 2012 at 1:37 PM, Reza Taheri <[email protected]> wrote:\n\n>\n> I provided more config details in an earlier email.****\n>\n> ** **\n>\n>\n>\nI hate to disagree, but unless I didn't get a message sent to the list, you\nhaven't provided any details about your postgresql config or otherwise\nadhered to the guidelines for starting a discussion of a performance\nproblem around here. I just searched my mailbox and no email from you has\nany such details. Several people have asked for them, including myself.\n You say you will give any details we want, but this is at least the 3rd or\n4th request for such details and they have not yet been forthcoming.\n\nOn Thu, Jul 5, 2012 at 1:37 PM, Reza Taheri <[email protected]> wrote:\n\nI provided more config details in an earlier email. \nI hate to disagree, but unless I didn't get a message sent to the list, you haven't provided any details about your postgresql config or otherwise adhered to the guidelines for starting a discussion of a performance problem around here. I just searched my mailbox and no email from you has any such details. Several people have asked for them, including myself. You say you will give any details we want, but this is at least the 3rd or 4th request for such details and they have not yet been forthcoming.",
"msg_date": "Thu, 5 Jul 2012 13:52:21 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
{
"msg_contents": "On 07/05/2012 03:52 PM, Samuel Gendler wrote:\n>\n>\n> On Thu, Jul 5, 2012 at 1:37 PM, Reza Taheri <[email protected] <mailto:[email protected]>> wrote:\n>\n>\n> I provided more config details in an earlier email.____\n>\n> __ __\n>\n>\n>\n> I hate to disagree, but unless I didn't get a message sent to the list, you haven't provided any details about your postgresql config or otherwise adhered to the guidelines for starting a discussion of a performance problem around here. I just searched my mailbox and no email from you has any such details. Several people have asked for them, including myself. You say you will give any details we want, but this is at least the 3rd or 4th request for such details and they have not yet been forthcoming.\n\n\nReza, I went back and looked myself. I see no specs on OS, or hardware.... unless you mean this:\n\n\n> http://bit.ly/QeWXhE. This was run on a similar server, and the database size is close to ours.\n\n\nYou're running on windows then? Server is 96Gig ram, 8 cores, (dell poweredge T610).\nwith two powervault MD1120 NAS's?\n\nBut then I assume you were not running on that, were you. You were running vmware on it, probably?\n\n\n-Andy\n",
"msg_date": "Thu, 05 Jul 2012 19:41:40 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
{
"msg_contents": "First: Please do try 9.2 beta if you're upgrading from 8.4. It'll be out \nas a final release soon enough, and index only scans may make a /big/ \ndifference for the problem you're currently having.\n\nLooking at your configuration I have a few comments, but it's worth \nnoting that I don't work with hardware at that scale, and I'm more used \nto tuning I/O bottlenecked systems with onboard storage rather than \nCPU-bottlenecked ones on big SANs. Hopefully now that you've posted your \nconfiguration and setup there might be interest from others.\n\nIf you're able to post an EXPLAIN ANALYZE or two for a query you feel is \nslow that certainly won't hurt. Using http://explain.depesz.com/ saves \nyou the hassle of dealing with word-wrapping when posting them, btw.\n\nAs for your config:\n\nI notice that your autovacuum settings are at their defaults. With heavy \nUPDATE / DELETE load this'll tend to lead to table and index bloat, so \nthe DB has to scan more useless data to get what it needs. It also means \ntable stats won't be maintained as well, potentially leading to poor \nplanner decisions. The following fairly scary query can help identify \nbloat, as the database server doesn't currently have anything much built \nin to help you spot such issues:\n\nhttp://wiki.postgresql.org/wiki/Show_database_bloat\n\nIt might be helpful to set effective_cache_size and \neffective_io_concurrency so Pg has more idea of the scale of your \nhardware. The defaults are very conservative - it's supposed to be easy \nfor people to use for simple things without melting their systems, and \nit's expected that anyone doing bigger work will tune the database.\n\nhttp://www.postgresql.org/docs/9.1/static/runtime-config-resource.html\n\nIt looks like you've already tweaked many of the critical points for big \ninstalls - your checkpoint_segments, wal_buffers, shared_buffers, etc. I \nlack the big hardware experience to know if they're appropriate, but \nthey're not the extremely conservative defaults, which is a start.\n\nYour random_page_cost and seq_page_cost are probably dead wrong for a \nSAN with RAM and SSD cache in front of fast disks. Their defaults are \nfor local uncached spinning HDD media where seeks are expensive. The \ntypical advice on such hardware is to set them to something more like \nseq_page_cost = 0.1 random_page_cost = 0.15 - ie cheaper relative to \nthe cpu cost, and with random I/O only a little more expensive than \nsequential I/O. What's right for your situation varies a bit based on DB \nsize vs hardware size, etc; Greg discusses this more in his book.\n\nWhat isolation level do your transactions use? This is significant \nbecause of the move to true serializable isolation with predicate \nlocking in 9.0; it made serializable transactions a bit slower in some \ncircumstances in exchange for much stronger correctness guarantees. The \nREAD COMMITTED default was unchanged.\n\n\n\nIt also looks like you might not have seen the second part of my earlier \nreply:\n\n\nwhile PostgreSQL doesn't support covering indexes or clustered indexes \nat this point, 9.2 has added support for index-only scans, which are a \nhalf-way point of sorts. See:\n\nhttp://rhaas.blogspot.com.au/2011/10/index-only-scans-weve-got-em.html\nhttp://rhaas.blogspot.com.au/2010/11/index-only-scans.html\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a2822fb9337a21f98ac4ce850bb4145acf47ca27\n\nIf at all possible please see how your test is affected by this \nPostgreSQL 9.2 enhancement. 
It should make a big difference, and if it \ndoesn't it's important to know why.\n\n(CC'd Robert Haas)\n\nI'm not sure what the best option for getting a 9.2 beta build for \nWindows is.\n\n\nAs for the \"invest\" side - that's really a matter for EnterpriseDB, \nCommand Prompt, Red Hat, and the other backers who're employing people \nto work on the DB. Consider asking on pgsql-hackers, too; if nothing \nelse you'll get a good explanation of the current state and progress \ntoward clustered indexes.\n\nSome links that may be useful to you are:\n\nhttp://wiki.postgresql.org/wiki/Todo\n Things that it'd be good to support/implement at some point. \nSurprisingly, covering/clustered indexes aren't on there or at least \naren't easily found. It's certainly a much-desired feature despite its \napparent absence from the TODO.\n\nhttp://wiki.postgresql.org/wiki/PostgreSQL_9.2_Development_Plan\nhttp://wiki.postgresql.org/wiki/PostgreSQL_9.2_Open_Items\n\n--\nCraig Ringer",
"msg_date": "Fri, 06 Jul 2012 08:46:11 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
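The cost-parameter suggestions above can be tried per session before committing anything to postgresql.conf; the effective_cache_size value here is a placeholder guess for the 40GB database VM, not a recommendation:

```sql
SET seq_page_cost = 0.1;             -- values suggested above for heavily cached storage
SET random_page_cost = 0.15;
SET effective_cache_size = '32GB';   -- placeholder: roughly VM RAM minus shared_buffers
-- Then re-run a representative query in the same session and compare plans:
-- EXPLAIN (ANALYZE, BUFFERS) <one of the slow benchmark queries>;
```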
{
"msg_contents": "On 07/06/2012 04:52 AM, Samuel Gendler wrote:\n>\n>\n> On Thu, Jul 5, 2012 at 1:37 PM, Reza Taheri <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n>\n> I provided more config details in an earlier email.\n>\n>\n>\n> I hate to disagree, but unless I didn't get a message sent to the list\n\nIt looks like that might be the case. I got a message with Message-ID \n66CE997FB523C04E9749452273184C6C137CB88CDD@exch-mbx-113.vmware.com sent \nat Thu, 5 Jul 2012 11:33:46 -0700 that contained the basic info, \npostgresql.conf, etc. Belated, but it was sent. I can't find this \nmessage in the archives and the copy I got came direct to me via cc, so \nI suspect our friendly mailing list system has silently held it for \nmoderation due to size/attachment.\n\n\nI'll reproduce the content below, followed by an inline copy of the \npostgresql.conf with only changed lines:\n\nOn 07/06/2012 02:33 AM, Reza Taheri wrote:\n>\n> OK, some config details.\n>\n> We are using:\n>\n> �Two blades of an HP BladeSystem c-Class c7000 with 2-socket Intel \n> E5520 (Nehalem-EP) processors and 48GB of memory per blade\n>\n> o8 cores, 16 threads per blade\n>\n> o48GB of RAM per blade\n>\n> �Storage was an EMC VNX5700 with 14 SSDs fronting 32 15K RPM drives\n>\n> �The Tier B database VM was alone on a blade with 16 vCPUs, 40GB of \n> memory, 4 virtual drives with various RAID levels\n>\n> �The driver and Tier A VMs were on the second blade\n>\n> oSo we set PGHOST on the client system to point to the server\n>\n> �RHEL 6.1\n>\n> �PostgreSQL 8.4\n>\n> �unixODBC 2.3.2\n>\n> We stuck with PGSQL 8.4 since it is the stock version shipped with \n> RHEL 6. I am building a new, larger testbed, and will switch to PGSQL \n> 9 with that.\n>\n>\n\npostgresql.conf:\n\n[craig@ayaki ~]$ egrep -v '(^\\s*#)|(^\\s*$)' /tmp/postgresql2.conf | cut \n-d '#' -f 1\nlisten_addresses = '*'\nmax_connections = 320\nshared_buffers = 28GB\ntemp_buffers = 200MB\nwork_mem = 10MB\nmaintenance_work_mem = 10MB\nbgwriter_delay = 10ms\nbgwriter_lru_maxpages = 20\nwal_buffers = 16MB\ncheckpoint_segments = 128\ncheckpoint_timeout = 30min\ncheckpoint_completion_target = 0.9\ndefault_statistics_target = 10000\nlogging_collector = on\nlog_directory = 'pg_log'\nlog_filename = 'postgresql-%a.log'\nlog_truncate_on_rotation = on\nlog_rotation_age = 1d\nlog_rotation_size = 0\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'en_US.UTF-8'\nlc_numeric = 'en_US.UTF-8'\nlc_time = 'en_US.UTF-8'\ndefault_text_search_config = 'pg_catalog.english'\n\n\n\n\n\n\n\n\nOn 07/06/2012 04:52 AM, Samuel Gendler\n wrote:\n\n\n\n\n\nOn Thu, Jul 5, 2012 at 1:37 PM, Reza Taheri <[email protected]>\n wrote:\n\n\n\n \n \nI provided more config details in an earlier\n email. \n \n \n \n\n\n\n\n\nI hate to disagree, but unless I didn't get a message sent\n to the list\n\n\n\n It looks like that might be the case. I got a message with\n Message-ID\n 66CE997FB523C04E9749452273184C6C137CB88CDD@exch-mbx-113.vmware.com\n sent at Thu, 5 Jul 2012 11:33:46 -0700 that contained the basic\n info, postgresql.conf, etc. Belated, but it was sent. I can't find\n this message in the archives and the copy I got came direct to me\n via cc, so I suspect our friendly mailing list system has silently\n held it for moderation due to size/attachment.\n\n\n I'll reproduce the content below, followed by an inline copy of the\n postgresql.conf with only changed lines:\n\n On 07/06/2012 02:33 AM, Reza Taheri wrote:\n\nOK, some config details. 
\nWe are using: \n \n· Two\n blades of an HP BladeSystem c-Class c7000 with 2-socket Intel\n E5520 (Nehalem-EP) processors and 48GB of memory per blade \no 8 cores, 16\n threads per blade \no 48GB of RAM\n per blade \n· Storage\n was an EMC VNX5700 with 14 SSDs fronting 32 15K RPM drives \n· The\n Tier B database VM was alone on a blade with 16 vCPUs, 40GB of\n memory, 4 virtual drives with various RAID levels \n· The\n driver and Tier A VMs were on the second blade \no So we set\n PGHOST on the client system to point to the server \n· RHEL\n 6.1 \n· PostgreSQL\n 8.4 \n· unixODBC\n 2.3.2 \n \nWe stuck with PGSQL 8.4 since it is the stock version\n shipped with RHEL 6. I am building a new, larger testbed, and\n will switch to PGSQL 9 with that. \n \n\n\n\n postgresql.conf:\n\n [craig@ayaki ~]$ egrep -v '(^\\s*#)|(^\\s*$)' /tmp/postgresql2.conf |\n cut -d '#' -f 1 \n listen_addresses = '*' \n max_connections = 320 \n shared_buffers = 28GB \n temp_buffers = 200MB \n work_mem = 10MB \n maintenance_work_mem = 10MB \n bgwriter_delay = 10ms \n bgwriter_lru_maxpages = 20 \n wal_buffers = 16MB \n checkpoint_segments = 128 \n checkpoint_timeout = 30min \n checkpoint_completion_target = 0.9 \n default_statistics_target = 10000 \n logging_collector = on \n log_directory = 'pg_log' \n log_filename = 'postgresql-%a.log' \n log_truncate_on_rotation = on \n log_rotation_age = 1d \n log_rotation_size = 0 \n datestyle = 'iso, mdy'\n lc_messages = 'en_US.UTF-8' \n lc_monetary = 'en_US.UTF-8' \n lc_numeric = 'en_US.UTF-8' \n lc_time = 'en_US.UTF-8' \n default_text_search_config = 'pg_catalog.english'",
"msg_date": "Fri, 06 Jul 2012 08:57:40 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
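An alternative to grepping postgresql.conf, as done above, is to ask the running server which settings are not at their built-in defaults (this also picks up per-database and per-session overrides):

```sql
SELECT name, setting, unit, source
FROM pg_settings
WHERE source <> 'default'
ORDER BY name;
```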
{
"msg_contents": "Well, I keep failing to send an email with an attachment. Do I need a moderator's approval?\n\nYes, running on VMs and a lower bin processor. With the virtualization overhead, etc., I figure we would be running right around 2/3 of the Dell throughput if we were running the same DBMS.\n\nI sent the following message twice today with attachments (postgresql.conf, etc.), and it hasn't been posted yet. Here it is without an attachment.\n\n****************************************************\n\nFrom: Reza Taheri \nSent: Thursday, July 05, 2012 11:34 AM\nTo: 'Craig Ringer'\nCc: [email protected]; Robert Haas\nSubject: RE: [PERFORM] The need for clustered indexes to boost TPC-V performance\n\nOK, some config details.\nWe are using:\n\n*\tTwo blades of an HP BladeSystem c-Class c7000 with 2-socket Intel E5520 (Nehalem-EP) processors and 48GB of memory per blade\no\t8 cores, 16 threads per blade\no\t48GB of RAM per blade\n*\tStorage was an EMC VNX5700 with 14 SSDs fronting 32 15K RPM drives\n*\tThe Tier B database VM was alone on a blade with 16 vCPUs, 40GB of memory, 4 virtual drives with various RAID levels\n*\tThe driver and Tier A VMs were on the second blade\no\tSo we set PGHOST on the client system to point to the server\n*\tRHEL 6.1\n*\tPostgreSQL 8.4\n*\tunixODBC 2.3.2\n\nWe stuck with PGSQL 8.4 since it is the stock version shipped with RHEL 6. I am building a new, larger testbed, and will switch to PGSQL 9 with that.\n\nPostgres.conf is attached.\n\nThanks,\nReza\n\n> -----Original Message-----\n> From: Andy Colson [mailto:[email protected]]\n> Sent: Thursday, July 05, 2012 5:42 PM\n> To: Samuel Gendler\n> Cc: Reza Taheri; Robert Klemme; [email protected]\n> Subject: Re: [PERFORM] The need for clustered indexes to boost TPC-V\n> performance\n> \n> On 07/05/2012 03:52 PM, Samuel Gendler wrote:\n> >\n> >\n> > On Thu, Jul 5, 2012 at 1:37 PM, Reza Taheri <[email protected]\n> <mailto:[email protected]>> wrote:\n> >\n> >\n> > I provided more config details in an earlier email.____\n> >\n> > __ __\n> >\n> >\n> >\n> > I hate to disagree, but unless I didn't get a message sent to the list, you\n> haven't provided any details about your postgresql config or otherwise\n> adhered to the guidelines for starting a discussion of a performance\n> problem around here. I just searched my mailbox and no email from you\n> has any such details. Several people have asked for them, including myself.\n> You say you will give any details we want, but this is at least the 3rd or 4th\n> request for such details and they have not yet been forthcoming.\n> \n> \n> Reza, I went back and looked myself. I see no specs on OS, or hardware....\n> unless you mean this:\n> \n> \n> > http://bit.ly/QeWXhE. This was run on a similar server, and the database\n> size is close to ours.\n> \n> \n> You're running on windows then? Server is 96Gig ram, 8 cores, (dell\n> poweredge T610).\n> with two powervault MD1120 NAS's?\n> \n> But then I assume you were not running on that, were you. You were\n> running vmware on it, probably?\n> \n> \n> -Andy\n",
"msg_date": "Thu, 5 Jul 2012 18:00:41 -0700",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V\n performance"
},
{
"msg_contents": "On 07/06/2012 08:41 AM, Andy Colson wrote:\n\n> You're running on windows then? Server is 96Gig ram, 8 cores, (dell \n> poweredge T610).\n> with two powervault MD1120 NAS's?\n\nThankfully they're running Pg on Linux (RHEL 6) . It seems that tests to \ndate have been run against 8.4 which is pretty elderly, but hopefully \nit'll be brought up to 9.1 or 9.2beta soon.\n\nWhile the original poster should've given a reasonable amount of \ninformation to start with when asking performance questions - as per the \nmailing list guidance and plain common sense - more info /was/ sent \nlater on /but the lists.postgresql.org mailman ate it /- or held it for \nmoderation, anyway. The OP can't be blamed when Pg's mailing list \nmanager eats mesages with attachments! Also, remember that not everyone \nuses community mailing lists regularly; it takes a little learning to \nget used to keeping track of conversations, to inline reply style, etc.\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 07/06/2012 08:41 AM, Andy Colson\n wrote:\n\n\nYou're\n running on windows then? Server is 96Gig ram, 8 cores, (dell\n poweredge T610).\n \n with two powervault MD1120 NAS's?\n \n\n\n Thankfully they're running Pg on Linux (RHEL 6) . It seems that\n tests to date have been run against 8.4 which is pretty elderly, but\n hopefully it'll be brought up to 9.1 or 9.2beta soon.\n\n While the original poster should've given a reasonable amount of\n information to start with when asking performance questions - as per\n the mailing list guidance and plain common sense - more info was\n sent later on but the lists.postgresql.org mailman ate it -\n or held it for moderation, anyway. The OP can't be blamed when Pg's\n mailing list manager eats mesages with attachments! Also, remember\n that not everyone uses community mailing lists regularly; it takes a\n little learning to get used to keeping track of conversations, to\n inline reply style, etc.\n\n --\n Craig Ringer",
"msg_date": "Fri, 06 Jul 2012 09:04:20 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
{
"msg_contents": "On 07/03/2012 07:13 PM, Reza Taheri wrote:\n> Is the PGSQL community willing to invest in a feature that a) has been\n> requested by many others already; and b) can make a huge difference in a\n> benchmark that can lend substantial credibility to PGSQL performance?\n\nLarger PostgreSQL features usually get built because companies sponsor \ntheir development, they pass review as both useful & correct, and then \nget committed. Asking the community to invest in a new feature isn't \nquite the right concept. Yes, everyone would like one of the smaller \nindex representations. I'm sure we can find reviewers willing to look \nat such a feature and committers who would also be interested enough to \ncommit it, on a volunteer basis. But a feature this size isn't going to \nspring to life based just on volunteer work. The most useful questions \nwould be \"who would be capable of writing that feature?\" and \"how can we \nget them sponsored to focus on it?\" I can tell from your comments yet \nwhat role(s) in that process VMWare wants to take on internally, and \nwhich it's looking for help with. The job of convincing people it's a \nuseful feature isn't necessary--we know that's true.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n",
"msg_date": "Thu, 05 Jul 2012 21:42:19 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
{
"msg_contents": "On 07/06/2012 09:33 AM, Samuel Gendler wrote:\n>\n> Some other potential issues - with only 10MB of work_mem, you might be \n> gong to temp space on disk more than you realize. Explain analyze \n> might reveal that, but only if you happen to pick a query that exceeds \n> work_mem on at least one step.\nRather than hunting blindly with EXPLAIN ANALYZE it's better to just \nturn log_temp_files on and see what's reported.\n\n--\nCraig Ringer\n\n\n\n\n\n\n\n\nOn 07/06/2012 09:33 AM, Samuel Gendler\n wrote:\n\n\n\n\n\n\nSome other potential issues - with only 10MB of work_mem,\n you might be gong to temp space on disk more than you realize.\n Explain analyze might reveal that, but only if you happen to\n pick a query that exceeds work_mem on at least one step.\n\n\n Rather than hunting blindly with EXPLAIN ANALYZE it's better to just\n turn log_temp_files on and see what's reported.\n\n --\n Craig Ringer",
"msg_date": "Fri, 06 Jul 2012 11:02:34 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] The need for clustered indexes to boost TPC-V\n\tperformance"
},
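A minimal psql sketch of the log_temp_files approach described in the message above. The table and column names (big_table, some_column) are placeholders, the thresholds are illustrative, and log_temp_files can only be changed with SET by a superuser - otherwise it goes in postgresql.conf:

    -- Log every temporary file a backend writes (0 = log all sizes; the value is in KB).
    SET log_temp_files = 0;

    -- A deliberately small work_mem so the sort below is likely to spill to disk.
    SET work_mem = '1MB';

    -- Any sort larger than work_mem shows up in the server log as a
    -- "temporary file" message with its size. big_table is hypothetical.
    SELECT * FROM big_table ORDER BY some_column;

    -- After raising work_mem, the same query should stop logging temp files.
    SET work_mem = '64MB';
    SELECT * FROM big_table ORDER BY some_column;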
{
"msg_contents": "On 07/06/2012 09:00 AM, Reza Taheri wrote:\n> Well, I keep failing to send an email with an attachment. Do I need a moderator's approval?\n\nProbably. If so, it's really annoying that mailman isn't telling you \nthis via a \"held for moderation\" auto-reply. It should be.\n\n> We stuck with PGSQL 8.4 since it is the stock version shipped with RHEL 6. I am building a new, larger testbed, and will switch to PGSQL 9 with that.\n\nJust so you know, as per PostgreSQL versioning policy major releases are \nx.y, eg \"8.4\", \"9.0\" and \"9.1\" are distinct major releases.\n\nhttp://www.postgresql.org/support/versioning/\n\nI've always found that pretty odd and wish major releases would just \nincrement the first version part, but the policy states how it's being \ndone. It's important to realize this when you're talking about Pg \nreleases, because 8.4, 9.0, 9.1 and 9.2 are distinct releases with \ndifferent feature sets, so \"postgresql 9\" doesn't mean much.\n\n--\nCraig Ringer\n\n\n",
"msg_date": "Fri, 06 Jul 2012 11:08:16 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
{
"msg_contents": "Just to be clear, we have a number of people from different companies working on the kit. This is not a VMware project, it is a TPC project. But I hear you regarding coming in from the cold and asking for a major db engine feature. I know that I have caused a lot of rolling eyes. Believe me, I have had the same (no, worse!) reaction from every one of the commercial database companies in response to similar requests over the past 25 years.\n\nWe have our skin in the game, and as long as the community values the benchmark and wants to support us, we will figure out the details as we go forward.\n\nThanks,\nReza\n\n> -----Original Message-----\n> From: Greg Smith [mailto:[email protected]]\n> Sent: Thursday, July 05, 2012 6:42 PM\n> To: Reza Taheri\n> Cc: [email protected]; Andy Bond ([email protected]);\n> Greg Kopczynski; Jignesh Shah\n> Subject: Re: [PERFORM] The need for clustered indexes to boost TPC-V\n> performance\n> \n> On 07/03/2012 07:13 PM, Reza Taheri wrote:\n> > Is the PGSQL community willing to invest in a feature that a) has been\n> > requested by many others already; and b) can make a huge difference in\n> > a benchmark that can lend substantial credibility to PGSQL performance?\n> \n> Larger PostgreSQL features usually get built because companies sponsor\n> their development, they pass review as both useful & correct, and then get\n> committed. Asking the community to invest in a new feature isn't quite the\n> right concept. Yes, everyone would like one of the smaller index\n> representations. I'm sure we can find reviewers willing to look at such a\n> feature and committers who would also be interested enough to commit it,\n> on a volunteer basis. But a feature this size isn't going to spring to life based\n> just on volunteer work. The most useful questions would be \"who would\n> be capable of writing that feature?\" and \"how can we get them sponsored\n> to focus on it?\" I can tell from your comments yet what role(s) in that\n> process VMWare wants to take on internally, and which it's looking for help\n> with. The job of convincing people it's a useful feature isn't necessary--we\n> know that's true.\n> \n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n",
"msg_date": "Thu, 5 Jul 2012 20:33:13 -0700",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V\n performance"
},
{
"msg_contents": "Hi Craig,\nI used the tool at depesz.com extensively during our early prototyping. It helped uncover ~10 problems that we solved by fixing issues in the code, adding or changing indexes, etc. Right now, I believe all our query plans look like what I would expect.\n\nYes, you are right, I did miss the link to the index-only scans. From what I can tell, it will do exactly what we want, but only as long as the index has all the columns in the query. I don't know what percentage of our queries have this property. But it does help.\n\nThe two main kit developers are out this week. We'll put our heads together next week to see what version to use when I switch to a larger testbed I am preparing.\n\nThanks,\nReza\n\nFrom: Craig Ringer [mailto:[email protected]]\nSent: Thursday, July 05, 2012 5:46 PM\nTo: Reza Taheri\nCc: [email protected]\nSubject: Re: [PERFORM] The need for clustered indexes to boost TPC-V performance\n\nFirst: Please do try 9.2 beta if you're upgrading from 8.4. It'll be out as a final release soon enough, and index only scans may make a big difference for the problem you're currently having.\n\nLooking at your configuration I have a few comments, but it's worth noting that I don't work with hardware at that scale, and I'm more used to tuning I/O bottlenecked systems with onboard storage rather than CPU-bottlenecked ones on big SANs. Hopefully now that you've posted your configuration and setup there might be interest from others.\n\nIf you're able to post an EXPLAIN ANALYZE or two for a query you feel is slow that certainly won't hurt. Using http://explain.depesz.com/ saves you the hassle of dealing with word-wrapping when posting them, btw.\n\nAs for your config:\n\nI notice that your autovacuum settings are at their defaults. With heavy UPDATE / DELETE load this'll tend to lead to table and index bloat, so the DB has to scan more useless data to get what it needs. It also means table stats won't be maintained as well, potentially leading to poor planner decisions. The following fairly scary query can help identify bloat, as the database server doesn't currently have anything much built in to help you spot such issues:\n\n http://wiki.postgresql.org/wiki/Show_database_bloat\n\nIt might be helpful to set effective_cache_size and effective_io_concurrency so Pg has more idea of the scale of your hardware. The defaults are very conservative - it's supposed to be easy for people to use for simple things without melting their systems, and it's expected that anyone doing bigger work will tune the database.\n\nhttp://www.postgresql.org/docs/9.1/static/runtime-config-resource.html\n\nIt looks like you've already tweaked many of the critical points for big installs - your checkpoint_segments, wal_buffers, shared_buffers, etc. I lack the big hardware experience to know if they're appropriate, but they're not the extremely conservative defaults, which is a start.\n\nYour random_page_cost and seq_page_cost are probably dead wrong for a SAN with RAM and SSD cache in front of fast disks. Their defaults are for local uncached spinning HDD media where seeks are expensive. The typical advice on such hardware is to set them to something more like seq_page_cost = 0.1 random_page_cost = 0.15 - ie cheaper relative to the cpu cost, and with random I/O only a little more expensive than sequential I/O. What's right for your situation varies a bit based on DB size vs hardware size, etc; Greg discusses this more in his book.\n\nWhat isolation level do your transactions use? 
This is significant because of the move to true serializable isolation with predicate locking in 9.0; it made serializable transactions a bit slower in some circumstances in exchange for much stronger correctness guarantees. The READ COMMITTED default was unchanged.\n\n\n\nIt also looks like you might not have seen the second part of my earlier reply:\n\nwhile PostgreSQL doesn't support covering indexes or clustered indexes at this point, 9.2 has added support for index-only scans, which are a half-way point of sorts. See:\n\n http://rhaas.blogspot.com.au/2011/10/index-only-scans-weve-got-em.html\n http://rhaas.blogspot.com.au/2010/11/index-only-scans.html\n http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a2822fb9337a21f98ac4ce850bb4145acf47ca27\n\nIf at all possible please see how your test is affected by this PostgreSQL 9.2 enhancement. It should make a big difference, and if it doesn't it's important to know why.\n\n(CC'd Robert Haas)\n\n\n\nAs for the \"invest\" side - that's really a matter for EnterpriseDB, Command Prompt, Red Hat, and the other backers who're employing people to work on the DB. Consider asking on pgsql-hackers, too; if nothing else you'll get a good explanation of the current state and progress toward clustered indexes.\n\nSome links that may be useful to you are:\n\n http://wiki.postgresql.org/wiki/Todo\n Things that it'd be good to support/implement at some point. Surprisingly, covering/clustered indexes aren't on there or at least aren't easily found. It's certainly a much-desired feature despite its apparent absence from the TODO.\n\n http://wiki.postgresql.org/wiki/PostgreSQL_9.2_Development_Plan\n http://wiki.postgresql.org/wiki/PostgreSQL_9.2_Open_Items\n\n--\nCraig Ringer\n\n\n\nHi Craig,I used the tool at depesz.com extensively during our early prototyping. It helped uncover ~10 problems that we solved by fixing issues in the code, adding or changing indexes, etc. Right now, I believe all our query plans look like what I would expect. Yes, you are right, I did miss the link to the index-only scans. From what I can tell, it will do exactly what we want, but only as long as the index has all the columns in the query. I don’t know what percentage of our queries have this property. But it does help. The two main kit developers are out this week. We’ll put our heads together next week to see what version to use when I switch to a larger testbed I am preparing. Thanks,Reza From: Craig Ringer [mailto:[email protected]] Sent: Thursday, July 05, 2012 5:46 PMTo: Reza TaheriCc: [email protected]: Re: [PERFORM] The need for clustered indexes to boost TPC-V performance First: Please do try 9.2 beta if you're upgrading from 8.4. It'll be out as a final release soon enough, and index only scans may make a big difference for the problem you're currently having.Looking at your configuration I have a few comments, but it's worth noting that I don't work with hardware at that scale, and I'm more used to tuning I/O bottlenecked systems with onboard storage rather than CPU-bottlenecked ones on big SANs. Hopefully now that you've posted your configuration and setup there might be interest from others.If you're able to post an EXPLAIN ANALYZE or two for a query you feel is slow that certainly won't hurt. Using http://explain.depesz.com/ saves you the hassle of dealing with word-wrapping when posting them, btw.As for your config:I notice that your autovacuum settings are at their defaults. 
With heavy UPDATE / DELETE load this'll tend to lead to table and index bloat, so the DB has to scan more useless data to get what it needs. It also means table stats won't be maintained as well, potentially leading to poor planner decisions. The following fairly scary query can help identify bloat, as the database server doesn't currently have anything much built in to help you spot such issues: http://wiki.postgresql.org/wiki/Show_database_bloatIt might be helpful to set effective_cache_size and effective_io_concurrency so Pg has more idea of the scale of your hardware. The defaults are very conservative - it's supposed to be easy for people to use for simple things without melting their systems, and it's expected that anyone doing bigger work will tune the database.http://www.postgresql.org/docs/9.1/static/runtime-config-resource.htmlIt looks like you've already tweaked many of the critical points for big installs - your checkpoint_segments, wal_buffers, shared_buffers, etc. I lack the big hardware experience to know if they're appropriate, but they're not the extremely conservative defaults, which is a start.Your random_page_cost and seq_page_cost are probably dead wrong for a SAN with RAM and SSD cache in front of fast disks. Their defaults are for local uncached spinning HDD media where seeks are expensive. The typical advice on such hardware is to set them to something more like seq_page_cost = 0.1 random_page_cost = 0.15 - ie cheaper relative to the cpu cost, and with random I/O only a little more expensive than sequential I/O. What's right for your situation varies a bit based on DB size vs hardware size, etc; Greg discusses this more in his book.What isolation level do your transactions use? This is significant because of the move to true serializable isolation with predicate locking in 9.0; it made serializable transactions a bit slower in some circumstances in exchange for much stronger correctness guarantees. The READ COMMITTED default was unchanged.It also looks like you might not have seen the second part of my earlier reply:while PostgreSQL doesn't support covering indexes or clustered indexes at this point, 9.2 has added support for index-only scans, which are a half-way point of sorts. See: http://rhaas.blogspot.com.au/2011/10/index-only-scans-weve-got-em.html http://rhaas.blogspot.com.au/2010/11/index-only-scans.html http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a2822fb9337a21f98ac4ce850bb4145acf47ca27If at all possible please see how your test is affected by this PostgreSQL 9.2 enhancement. It should make a big difference, and if it doesn't it's important to know why. (CC'd Robert Haas)As for the \"invest\" side - that's really a matter for EnterpriseDB, Command Prompt, Red Hat, and the other backers who're employing people to work on the DB. Consider asking on pgsql-hackers, too; if nothing else you'll get a good explanation of the current state and progress toward clustered indexes.Some links that may be useful to you are: http://wiki.postgresql.org/wiki/Todo Things that it'd be good to support/implement at some point. Surprisingly, covering/clustered indexes aren't on there or at least aren't easily found. It's certainly a much-desired feature despite its apparent absence from the TODO. http://wiki.postgresql.org/wiki/PostgreSQL_9.2_Development_Plan http://wiki.postgresql.org/wiki/PostgreSQL_9.2_Open_Items--Craig Ringer",
"msg_date": "Thu, 5 Jul 2012 21:19:32 -0700",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V\n performance"
},
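The cost and cache settings discussed above, collected into one hedged sketch. Only the seq/random page costs come from the advice in the message; the effective_cache_size and effective_io_concurrency values are assumptions for a 40GB VM on SSD-cached storage, and in practice all of these would normally be set in postgresql.conf rather than per session:

    -- Planner cost settings for fast, cached storage (figures from the message above).
    SET seq_page_cost = 0.1;
    SET random_page_cost = 0.15;

    -- Rough estimate of how much of the database the OS cache plus shared_buffers
    -- can hold; ~30GB is an assumed figure for a 40GB guest, not a measurement.
    SET effective_cache_size = '30GB';

    -- How many concurrent I/Os the storage can service for bitmap heap scans;
    -- arrays and SSDs can take considerably more than the default of 1.
    SET effective_io_concurrency = 8;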
{
"msg_contents": "On Thu, Jul 5, 2012 at 10:33 PM, Reza Taheri <[email protected]> wrote:\n> Just to be clear, we have a number of people from different companies working on the kit. This is not a VMware project, it is a TPC project. But I hear you regarding coming in from the cold and asking for a major db engine feature. I know that I have caused a lot of rolling eyes. Believe me, I have had the same (no, worse!) reaction from every one of the commercial database companies in response to similar requests over the past 25 years.\n\nNo rolling of eyes from me. Clustered indexes work and if your table\naccess mainly hits the table through that index you'll see enormous\nreductions in i/o. Index only scans naturally are a related\noptimization in the same vein. Denying that is just silly. BTW,\nputting postgres through a standard non trivial benchmark suite over\nreasonable hardware, reporting results, identifying bottlenecks, etc.\nis incredibly useful. Please keep it up, and don't be afraid to ask\nfor help here. (one thing I'd love to see is side by side results\ncomparing 8.4 to 9.1 to 9.2).\n\nmerlin\n",
"msg_date": "Tue, 10 Jul 2012 14:05:55 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V performance"
},
{
"msg_contents": "Hi Merlin,\nWe are moving up to a larger testbed, and are planning to use 9.2. But the results will not comparable to our 8.4 results due to differences in hardware. But that comparison is a useful one. I'll try for a quick test on the new hardware with 8.4 before moving to 9.2.\n\nThanks,\nReza\n\n> -----Original Message-----\n> From: Merlin Moncure [mailto:[email protected]]\n> Sent: Tuesday, July 10, 2012 12:06 PM\n> To: Reza Taheri\n> Cc: Greg Smith; [email protected]\n> Subject: Re: [PERFORM] The need for clustered indexes to boost TPC-V\n> performance\n> \n> On Thu, Jul 5, 2012 at 10:33 PM, Reza Taheri <[email protected]> wrote:\n> > Just to be clear, we have a number of people from different companies\n> working on the kit. This is not a VMware project, it is a TPC project. But I\n> hear you regarding coming in from the cold and asking for a major db engine\n> feature. I know that I have caused a lot of rolling eyes. Believe me, I have\n> had the same (no, worse!) reaction from every one of the commercial\n> database companies in response to similar requests over the past 25 years.\n> \n> No rolling of eyes from me. Clustered indexes work and if your table access\n> mainly hits the table through that index you'll see enormous reductions in\n> i/o. Index only scans naturally are a related optimization in the same vein.\n> Denying that is just silly. BTW, putting postgres through a standard non\n> trivial benchmark suite over reasonable hardware, reporting results,\n> identifying bottlenecks, etc.\n> is incredibly useful. Please keep it up, and don't be afraid to ask for help\n> here. (one thing I'd love to see is side by side results comparing 8.4 to 9.1 to\n> 9.2).\n> \n> merlin\n",
"msg_date": "Tue, 10 Jul 2012 12:35:19 -0700",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The need for clustered indexes to boost TPC-V\n performance"
}
] |
[
{
"msg_contents": "I want to implement a \"paged Query\" feature, where the user can enter in \na dialog, how much rows he want to see. After displaying the first page \nof rows, he can can push a button to display the next/previous page.\nOn database level I could user \"limit\" to implement this feature. My \nproblem now is, that the user is not permitted to view all rows. For \nevery row a permission check is performed and if permission is granted, \nthe row is added to the list of rows sent to the client.\nIf for example the user has entered a page size of 50 and I use \"limit \n50\" to only fetch 50 records, what should I do if he is only permitted \nto see 20 of these 50 records? There may be more records he can view.\nBut if I don't use \"limit\", what happens if the query would return \n5,000,000 rows? Would my result set contain 5,000,000 rows or would the \nperformance of the database go down?\n\nThanks in advance\nHermann\n",
"msg_date": "Wed, 04 Jul 2012 14:25:28 +0200",
"msg_from": "Hermann Matthes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Paged Query"
},
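For reference, the plain LIMIT/OFFSET form of the paging query being discussed; documents and its columns are hypothetical names, and the permission problem raised above is addressed in the replies that follow:

    -- Fetch one page of 50 rows with a deterministic order.
    SELECT doc_id, title, created_at
    FROM   documents
    ORDER  BY doc_id
    LIMIT  50        -- page size
    OFFSET 100;      -- page_number * page_size (first page = OFFSET 0)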
{
"msg_contents": "What language are you using? Usually there is iterator with chunked fetch\noption (like setFetchSize in java jdbc). So you are passing query without\nlimit and then read as many results as you need. Note that query plan in\nthis case won't be optimized for your limit and I don't remember if\npostgres has \"optimize for N rows\" statement option.\nAlso, if your statement is ordered by some key, you can use general paging\ntechnique when you rerun query with \"key>max_prev_value\" filter to get next\nchunk.\n\nСереда, 4 липня 2012 р. користувач Hermann Matthes <[email protected]>\nнаписав:\n> I want to implement a \"paged Query\" feature, where the user can enter in\na dialog, how much rows he want to see. After displaying the first page of\nrows, he can can push a button to display the next/previous page.\n> On database level I could user \"limit\" to implement this feature. My\nproblem now is, that the user is not permitted to view all rows. For every\nrow a permission check is performed and if permission is granted, the row\nis added to the list of rows sent to the client.\n> If for example the user has entered a page size of 50 and I use \"limit\n50\" to only fetch 50 records, what should I do if he is only permitted to\nsee 20 of these 50 records? There may be more records he can view.\n> But if I don't use \"limit\", what happens if the query would return\n5,000,000 rows? Would my result set contain 5,000,000 rows or would the\nperformance of the database go down?\n>\n> Thanks in advance\n> Hermann\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nWhat language are you using? Usually there is iterator with chunked fetch option (like setFetchSize in java jdbc). So you are passing query without limit and then read as many results as you need. Note that query plan in this case won't be optimized for your limit and I don't remember if postgres has \"optimize for N rows\" statement option.\nAlso, if your statement is ordered by some key, you can use general paging technique when you rerun query with \"key>max_prev_value\" filter to get next chunk.Середа, 4 липня 2012 р. користувач Hermann Matthes <[email protected]> написав:\n> I want to implement a \"paged Query\" feature, where the user can enter in a dialog, how much rows he want to see. After displaying the first page of rows, he can can push a button to display the next/previous page.\n> On database level I could user \"limit\" to implement this feature. My problem now is, that the user is not permitted to view all rows. For every row a permission check is performed and if permission is granted, the row is added to the list of rows sent to the client.\n> If for example the user has entered a page size of 50 and I use \"limit 50\" to only fetch 50 records, what should I do if he is only permitted to see 20 of these 50 records? There may be more records he can view.\n> But if I don't use \"limit\", what happens if the query would return 5,000,000 rows? Would my result set contain 5,000,000 rows or would the performance of the database go down?>> Thanks in advance\n> Hermann>> --> Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance\n>-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Fri, 6 Jul 2012 16:18:38 +0300",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
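The 'key > max_prev_value' technique mentioned above, as a SQL sketch. Table and column names are made up, and it assumes the paging key is unique; otherwise page boundaries can drop or repeat rows:

    -- First page: no lower bound yet.
    SELECT doc_id, title
    FROM   documents
    ORDER  BY doc_id
    LIMIT  50;

    -- Remember the doc_id of the last row shown (say it was 1234) and fetch the
    -- next page relative to it. Unlike a large OFFSET, this stays fast however
    -- deep into the result the user pages, because it can start at the index.
    SELECT doc_id, title
    FROM   documents
    WHERE  doc_id > 1234
    ORDER  BY doc_id
    LIMIT  50;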
{
"msg_contents": "Hermann Matthes wrote:\n> I want to implement a \"paged Query\" feature, where the user can enter\nin\n> a dialog, how much rows he want to see. After displaying the first\npage\n> of rows, he can can push a button to display the next/previous page.\n> On database level I could user \"limit\" to implement this feature. My\n> problem now is, that the user is not permitted to view all rows. For\n> every row a permission check is performed and if permission is\ngranted,\n> the row is added to the list of rows sent to the client.\n> If for example the user has entered a page size of 50 and I use \"limit\n> 50\" to only fetch 50 records, what should I do if he is only permitted\n> to see 20 of these 50 records? There may be more records he can view.\n> But if I don't use \"limit\", what happens if the query would return\n> 5,000,000 rows? Would my result set contain 5,000,000 rows or would\nthe\n> performance of the database go down?\n\nSelecting all 5000000 rows would consume a lot of memory wherever\nthey are cached. Also, it might lead to bad response times (with\nan appropriate LIMIT clause, the server can choose a plan that\nreturns the first few rows quickly).\n\nI assume that there is some kind of ORDER BY involved, so that\nthe order of rows displayed is not random.\n\nI have two ideas:\n- Try to integrate the permission check in the query.\n It might be more efficient, and you could just use LIMIT\n and OFFSET like you intended.\n- Select some more rows than you want to display on one page,\n perform the permission checks. Stop when you reach the end\n or have enough rows. Remember the sort key of the last row\n processed.\n When the next page is to be displayed, use the remembered\n sort key value to SELECT the next rows.\n\nYours,\nLaurenz Albe\n",
"msg_date": "Fri, 6 Jul 2012 15:19:15 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
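A sketch of the first suggestion above - pushing the permission check into the query so LIMIT counts only permitted rows - combined with the remembered-sort-key idea for the next page. user_has_permission() is an assumed helper standing in for the application's real row check (one possible definition is sketched after Misa Simic's message below), and 42 stands in for the current user id:

    -- Every row returned is already visible to the user, so a page is always full.
    SELECT d.doc_id, d.title
    FROM   documents d
    WHERE  user_has_permission(d.doc_id, 42)
    ORDER  BY d.doc_id
    LIMIT  50;

    -- Next page: continue from the last doc_id shown instead of using OFFSET.
    SELECT d.doc_id, d.title
    FROM   documents d
    WHERE  user_has_permission(d.doc_id, 42)
      AND  d.doc_id > 1234          -- last doc_id from the previous page
    ORDER  BY d.doc_id
    LIMIT  50;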
{
"msg_contents": "On Wed, Jul 4, 2012 at 6:25 AM, Hermann Matthes <[email protected]>wrote:\n\n> I want to implement a \"paged Query\" feature, where the user can enter in a\n> dialog, how much rows he want to see. After displaying the first page of\n> rows, he can can push a button to display the next/previous page.\n> On database level I could user \"limit\" to implement this feature. My\n> problem now is, that the user is not permitted to view all rows. For every\n> row a permission check is performed and if permission is granted, the row\n> is added to the list of rows sent to the client.\n> If for example the user has entered a page size of 50 and I use \"limit 50\"\n> to only fetch 50 records, what should I do if he is only permitted to see\n> 20 of these 50 records? There may be more records he can view.\n> But if I don't use \"limit\", what happens if the query would return\n> 5,000,000 rows? Would my result set contain 5,000,000 rows or would the\n> performance of the database go down?\n>\n>\nSounds like your permission check is not implemented in the database. If\nit were, those records would be excluded and the OFFSET-LIMIT combo would\nbe your solution. Also appears that you have access to the application.\n If so, I would recommend implementing the permission check in the\ndatabase. Much cleaner from a query & pagination standpoint.\n\nAn alternative is to have the application complicate the query with the\nappropriate permission logic excluding the unviewable records from the\nfinal ORDER BY-OFFSET-LIMIT. This will give you an accurate page count.\n\nIMHO, the worst alternative is to select your max page size, exclude rows\nthe user cannot see, rinse and repeat until you have your records per page\nlimit. Whatever you're ordering on will serve as the page number. Issue\nwith this solution is you may not have an accurate page count.\n\nLuck.\n\n-Greg\n\nOn Wed, Jul 4, 2012 at 6:25 AM, Hermann Matthes <[email protected]> wrote:\n\nI want to implement a \"paged Query\" feature, where the user can enter in a dialog, how much rows he want to see. After displaying the first page of rows, he can can push a button to display the next/previous page.\n\n\nOn database level I could user \"limit\" to implement this feature. My problem now is, that the user is not permitted to view all rows. For every row a permission check is performed and if permission is granted, the row is added to the list of rows sent to the client.\n\n\nIf for example the user has entered a page size of 50 and I use \"limit 50\" to only fetch 50 records, what should I do if he is only permitted to see 20 of these 50 records? There may be more records he can view.\n\n\nBut if I don't use \"limit\", what happens if the query would return 5,000,000 rows? Would my result set contain 5,000,000 rows or would the performance of the database go down?\nSounds like your permission check is not implemented in the database. If it were, those records would be excluded and the OFFSET-LIMIT combo would be your solution. Also appears that you have access to the application. If so, I would recommend implementing the permission check in the database. Much cleaner from a query & pagination standpoint.\nAn alternative is to have the application complicate the query with the appropriate permission logic excluding the unviewable records from the final ORDER BY-OFFSET-LIMIT. 
This will give you an accurate page count.\nIMHO, the worst alternative is to select your max page size, exclude rows the user cannot see, rinse and repeat until you have your records per page limit. Whatever you're ordering on will serve as the page number. Issue with this solution is you may not have an accurate page count.\nLuck.-Greg",
"msg_date": "Fri, 6 Jul 2012 07:35:08 -0600",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
{
"msg_contents": "Hi Hermann,\n\nWell,\n\nNot clear how you get rows for user without paging?\n\nIf it is some query:\n\nSELECT columns FROM table WHERE UserHasPerimision(rowPK, userid)\n\nPaging would be:\n\nSELECT columns FROM table WHERE UserHasPerimision(rowPK, userid) LIMIT\nNoOfRecords OFFSET page*NoOfRecords\n\nKind Regards,\n\nMisa\n\n2012/7/4 Hermann Matthes <[email protected]>\n\n> I want to implement a \"paged Query\" feature, where the user can enter in a\n> dialog, how much rows he want to see. After displaying the first page of\n> rows, he can can push a button to display the next/previous page.\n> On database level I could user \"limit\" to implement this feature. My\n> problem now is, that the user is not permitted to view all rows. For every\n> row a permission check is performed and if permission is granted, the row\n> is added to the list of rows sent to the client.\n> If for example the user has entered a page size of 50 and I use \"limit 50\"\n> to only fetch 50 records, what should I do if he is only permitted to see\n> 20 of these 50 records? There may be more records he can view.\n> But if I don't use \"limit\", what happens if the query would return\n> 5,000,000 rows? Would my result set contain 5,000,000 rows or would the\n> performance of the database go down?\n>\n> Thanks in advance\n> Hermann\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\nHi Hermann,Well,Not clear how you get rows for user without paging?If it is some query:SELECT columns FROM table WHERE UserHasPerimision(rowPK, userid)\nPaging would be:SELECT columns FROM table WHERE UserHasPerimision(rowPK, userid) LIMIT NoOfRecords OFFSET page*NoOfRecordsKind Regards,\nMisa2012/7/4 Hermann Matthes <[email protected]>\nI want to implement a \"paged Query\" feature, where the user can enter in a dialog, how much rows he want to see. After displaying the first page of rows, he can can push a button to display the next/previous page.\n\nOn database level I could user \"limit\" to implement this feature. My problem now is, that the user is not permitted to view all rows. For every row a permission check is performed and if permission is granted, the row is added to the list of rows sent to the client.\n\nIf for example the user has entered a page size of 50 and I use \"limit 50\" to only fetch 50 records, what should I do if he is only permitted to see 20 of these 50 records? There may be more records he can view.\n\nBut if I don't use \"limit\", what happens if the query would return 5,000,000 rows? Would my result set contain 5,000,000 rows or would the performance of the database go down?\n\nThanks in advance\nHermann\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 6 Jul 2012 15:43:58 +0200",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
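To make the permission function in the query above concrete, one possible shape for it; the row_acl table, its columns and the function name are entirely hypothetical, and a real implementation would follow the application's own ACL model:

    -- Hypothetical ACL table: one row per (record, user) grant.
    CREATE TABLE row_acl (
        row_pk   integer NOT NULL,
        user_id  integer NOT NULL,
        can_read boolean NOT NULL DEFAULT true,
        PRIMARY KEY (row_pk, user_id)
    );

    -- True if an ACL row grants the user read access to the given record.
    -- Positional parameters ($1 = row primary key, $2 = user id) keep this
    -- valid for SQL-language functions on older releases as well.
    CREATE OR REPLACE FUNCTION user_has_permission(integer, integer)
    RETURNS boolean
    LANGUAGE sql STABLE AS $$
        SELECT EXISTS (
            SELECT 1
            FROM   row_acl a
            WHERE  a.row_pk  = $1
              AND  a.user_id = $2
              AND  a.can_read
        );
    $$;

    -- Used exactly as in the message above:
    -- SELECT columns FROM some_table t
    -- WHERE user_has_permission(t.id, 42)
    -- ORDER BY t.id LIMIT 50 OFFSET 0;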
{
"msg_contents": "Use cursors.\nBy far the most flexible. offset/limit have their down sides.\n",
"msg_date": "Mon, 9 Jul 2012 12:55:33 +0100",
"msg_from": "Gregg Jaskiewicz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
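A minimal cursor-based paging sketch for comparison; it assumes the client keeps the transaction open between page requests, which is exactly the trade-off debated in the replies that follow, and documents is a hypothetical table:

    BEGIN;

    -- Declared once; the result can then be walked forward (and, with SCROLL,
    -- backward) in page-sized chunks without re-running the query.
    DECLARE page_cur SCROLL CURSOR FOR
        SELECT doc_id, title
        FROM   documents
        ORDER  BY doc_id;

    FETCH 50 FROM page_cur;            -- page 1
    FETCH 50 FROM page_cur;            -- page 2
    MOVE BACKWARD 100 FROM page_cur;   -- back to the start if needed

    CLOSE page_cur;
    COMMIT;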
{
"msg_contents": "On 07/09/2012 07:55 PM, Gregg Jaskiewicz wrote:\n> Use cursors.\n> By far the most flexible. offset/limit have their down sides.\nDo do cursors.\n\nKeeping a cursor open across user think time has resource costs on the \ndatabase. It doesn't necessarily require keeping the transaction open \n(with hold cursors) but it's going to either require a snapshot to be \nretained or the whole query to be executed by the DB and stored somewhere.\n\nThen the user goes away on a week's holiday and leaves their PC at your \n\"next\" button.\n\nAll in all, limit/offset have better bounded and defined costs, albeit \nnot very nice ones.\n\n--\nCraig Ringer\n\n",
"msg_date": "Mon, 09 Jul 2012 20:02:38 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
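For reference, the "with hold" variant mentioned above looks like this; the caveat in the message applies, since the result is materialized at COMMIT and the cursor keeps using resources until it is closed or the session ends. documents is again a hypothetical table:

    BEGIN;
    DECLARE page_cur CURSOR WITH HOLD FOR
        SELECT doc_id, title
        FROM   documents
        ORDER  BY doc_id;
    COMMIT;                    -- the cursor survives the commit...

    FETCH 50 FROM page_cur;    -- ...and can be fetched from later requests
    FETCH 50 FROM page_cur;

    CLOSE page_cur;            -- but it must be closed explicitly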
{
"msg_contents": "On 07/09/2012 07:02 AM, Craig Ringer wrote:\n\n> Do do cursors.\n\nDid you mean \"Do not use cursors\" here?\n\n> Then the user goes away on a week's holiday and leaves their PC at\n> your \"next\" button.\n\nThis exactly. Cursors have limited functionality that isn't directly \ndisruptive to the database in general. At the very least, the \ntransaction ID reservation necessary to preserve a cursor long-term can \nwreak havoc on your transaction ID wraparound if you have a fairly busy \ndatabase. I can't think of a single situation where either client \ncaching or LIMIT/OFFSET can't supplant it with better risk levels and costs.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 9 Jul 2012 08:22:10 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
{
"msg_contents": "On Mon, Jul 9, 2012 at 6:22 AM, Shaun Thomas <[email protected]>wrote:\n\n> On 07/09/2012 07:02 AM, Craig Ringer wrote:\n>\n> Do do cursors.\n>>\n>\n> Did you mean \"Do not use cursors\" here?\n>\n> Then the user goes away on a week's holiday and leaves their PC at\n>> your \"next\" button.\n>>\n>\n> This exactly. Cursors have limited functionality that isn't directly\n> disruptive to the database in general. At the very least, the transaction\n> ID reservation necessary to preserve a cursor long-term can wreak havoc on\n> your transaction ID wraparound if you have a fairly busy database. I can't\n> think of a single situation where either client caching or LIMIT/OFFSET\n> can't supplant it with better risk levels and costs.\n>\n\nA good solution to this general problem is \"hitlists.\" I wrote about this\nconcept before:\n\nhttp://archives.postgresql.org/pgsql-performance/2010-05/msg00058.php\n\nCraig James (the other Craig)\n\n\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n> 312-444-8534\n> [email protected]\n>\n>\n>\n> ______________________________**________________\n>\n> See http://www.peak6.com/email_**disclaimer/<http://www.peak6.com/email_disclaimer/>for terms and conditions related to this email\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\nOn Mon, Jul 9, 2012 at 6:22 AM, Shaun Thomas <[email protected]> wrote:\nOn 07/09/2012 07:02 AM, Craig Ringer wrote:\n\n\nDo do cursors.\n\n\nDid you mean \"Do not use cursors\" here?\n\n\nThen the user goes away on a week's holiday and leaves their PC at\nyour \"next\" button.\n\n\nThis exactly. Cursors have limited functionality that isn't directly disruptive to the database in general. At the very least, the transaction ID reservation necessary to preserve a cursor long-term can wreak havoc on your transaction ID wraparound if you have a fairly busy database. I can't think of a single situation where either client caching or LIMIT/OFFSET can't supplant it with better risk levels and costs.\nA good solution to this general problem is \"hitlists.\" I wrote about this concept before:http://archives.postgresql.org/pgsql-performance/2010-05/msg00058.php\nCraig James (the other Craig)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 9 Jul 2012 07:16:00 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
{
"msg_contents": "On Mon, Jul 9, 2012 at 8:16 AM, Craig James <[email protected]> wrote:\n\n>\n> A good solution to this general problem is \"hitlists.\" I wrote about this\n> concept before:\n>\n> http://archives.postgresql.org/pgsql-performance/2010-05/msg00058.php\n>\n>\nI implemented this exact strategy in our product years ago. Our queries\nwere once quite complicated involving many nested sub-SELECT's and several\nJOIN's per SELECT. The basics of our implementation now consists of\n\n 1. A table tracking all \"cache\" tables. A cache table is a permanent\ntable once represented as one of the former sub-SELECT's. The table\nincludes the MD5 hash of the query used to create the table, time created,\nquery type (helps to determine expire time), and a comment field to help in\ndebugging.\n 2. Simple logic checking for the existence of the cache table and creating\nit if it does not.\n 3. Using one or many of the named cache tables in the final query using\nORDER BY-LIMIT-OFFSET in a CURSOR.\n 4. One scheduled backend process to clear the \"expired\" cache tables based\non the query type.\n\nReason for the CURSOR is to execute once to get a tally of records for\npagination purposes then rewind and fetch the right \"page\".\n\nHighly recommended.\n\n-Greg\n\nOn Mon, Jul 9, 2012 at 8:16 AM, Craig James <[email protected]> wrote:\nA good solution to this general problem is \"hitlists.\" I wrote about this concept before:http://archives.postgresql.org/pgsql-performance/2010-05/msg00058.php\nI implemented this exact strategy in our product years ago. Our queries were once quite complicated involving many nested sub-SELECT's and several JOIN's per SELECT. The basics of our implementation now consists of\n 1. A table tracking all \"cache\" tables. A cache table is a permanent table once represented as one of the former sub-SELECT's. The table includes the MD5 hash of the query used to create the table, time created, query type (helps to determine expire time), and a comment field to help in debugging.\n 2. Simple logic checking for the existence of the cache table and creating it if it does not. 3. Using one or many of the named cache tables in the final query using ORDER BY-LIMIT-OFFSET in a CURSOR.\n 4. One scheduled backend process to clear the \"expired\" cache tables based on the query type.Reason for the CURSOR is to execute once to get a tally of records for pagination purposes then rewind and fetch the right \"page\".\nHighly recommended.-Greg",
"msg_date": "Mon, 9 Jul 2012 08:33:36 -0600",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
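A much-simplified sketch of the cache-table bookkeeping described above; the registry layout, naming scheme and expiry policy are assumptions, since the message only outlines the approach, and documents is a hypothetical source table:

    -- 1. Registry of materialized cache tables, keyed by the MD5 of the query
    --    text that produced them.
    CREATE TABLE query_cache_registry (
        cache_table text        PRIMARY KEY,    -- e.g. 'cache_<md5 prefix>'
        query_md5   text        NOT NULL,
        query_type  text        NOT NULL,       -- drives the expiry policy
        created_at  timestamptz NOT NULL DEFAULT now(),
        note        text
    );

    -- 2. Materialize one former sub-SELECT as a permanent table (done by the
    --    application the first time a query's MD5 is not found in the registry).
    CREATE TABLE cache_0a1b2c AS
        SELECT d.doc_id, d.title, d.created_at
        FROM   documents d
        WHERE  d.status = 'published';

    INSERT INTO query_cache_registry (cache_table, query_md5, query_type, note)
    VALUES ('cache_0a1b2c',
            md5('SELECT doc_id, title, created_at FROM documents WHERE status = ''published'''),
            'doc_listing',
            'published docs snapshot');

    -- 3. Page through the cached result with a cursor, as in the message:
    --    count once for pagination, rewind, then fetch the requested page.
    BEGIN;
    DECLARE hit_cur SCROLL CURSOR FOR
        SELECT * FROM cache_0a1b2c ORDER BY doc_id;
    MOVE FORWARD ALL FROM hit_cur;   -- command tag reports the row count
    MOVE ABSOLUTE 0 FROM hit_cur;    -- rewind to before the first row
    FETCH 50 FROM hit_cur;
    CLOSE hit_cur;
    COMMIT;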
{
"msg_contents": "2012/7/9 Gregg Jaskiewicz <[email protected]>\n\n> Use cursors.\n> By far the most flexible. offset/limit have their down sides.\n>\n\n\nWell, I am not aware what down sides there are in LIMIT OFFSET what does\nnot exist in any other solutions for paged queries... But agree there\nalways must be some compromise between flexibility and response time (as\nlong user \"have\" impression he works \"immediatly\" so is query executed in\n1ms od 1s - not important...)\n\nQuery must be parsed and executed (inside DB, before returns results... -\nso this time is unavoidable) Cursors will ensure just to take (executed\nresults) 1 by 1 from DB,,, OK in Cursor scenario parse and Execute is done\njust once... But execution plans are cached - though I don't see big\ndownside if it is executed thousands times... you will notice in Pg that\nsecond query is much faster then 1st one...\n\nSo if you need to go straight forward form page 1 to page 576 (in\nsituations bellow 100 pages - 50 rows by page - no point to discuss\nperformance... You can get all rows from DB at once and do \"paging\" in\nclient side in memory) - I agree response will be a bit slower in\nLIMIT/OFFSET case, however not sure in CURSOR scenario it will be much\nfaster, to be more worth then many others limits of Cursors in General...\n(Personally I have not used them more then 7 years - Really don't see need\nfor them todays when hardware have more and more power...)\n\n From my experience users even very rare go to ending pages... easier to\nthem would be to sort data by field to get those rows in very first pages...\n\nKind Regards,\n\nMisa\n\n2012/7/9 Gregg Jaskiewicz <[email protected]>\nUse cursors.\nBy far the most flexible. offset/limit have their down sides.\nWell, I am not aware what down sides there are in LIMIT OFFSET what does not exist in any other solutions for paged queries... But agree there always must be some compromise between flexibility and response time (as long user \"have\" impression he works \"immediatly\" so is query executed in 1ms od 1s - not important...) \nQuery must be parsed and executed (inside DB, before returns results... - so this time is unavoidable) Cursors will ensure just to take (executed results) 1 by 1 from DB,,, OK in Cursor scenario parse and Execute is done just once... But execution plans are cached - though I don't see big downside if it is executed thousands times... you will notice in Pg that second query is much faster then 1st one...\nSo if you need to go straight forward form page 1 to page 576 (in situations bellow 100 pages - 50 rows by page - no point to discuss performance... You can get all rows from DB at once and do \"paging\" in client side in memory) - I agree response will be a bit slower in LIMIT/OFFSET case, however not sure in CURSOR scenario it will be much faster, to be more worth then many others limits of Cursors in General... (Personally I have not used them more then 7 years - Really don't see need for them todays when hardware have more and more power...)\nFrom my experience users even very rare go to ending pages... easier to them would be to sort data by field to get those rows in very first pages...Kind Regards,\nMisa",
"msg_date": "Mon, 9 Jul 2012 19:41:09 +0200",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
{
"msg_contents": "\nOn 07/09/2012 01:41 PM, Misa Simic wrote:\n>\n>\n> From my experience users even very rare go to ending pages... easier \n> to them would be to sort data by field to get those rows in very first \n> pages...\n>\n>\n\n\nYeah, the problem really is that most client code wants to know how many \npages there are, even if it only wants one page right now.\n\ncheers\n\nandrew\n",
"msg_date": "Mon, 09 Jul 2012 13:46:26 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
{
"msg_contents": "On Mon, Jul 9, 2012 at 1:46 PM, Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 07/09/2012 01:41 PM, Misa Simic wrote:\n>\n>>\n>>\n>> From my experience users even very rare go to ending pages... easier to\n>> them would be to sort data by field to get those rows in very first pages...\n>>\n>>\n>>\n>\n> Yeah, the problem really is that most client code wants to know how many\n> pages there are, even if it only wants one page right now.\n>\n\nFWIW, I wrote a little about getting the numbered results along with total\nresult count in one query[1]. The suggestions in comments to use CTE\nprovided even better performance.\n\n[1]\nhttp://gurjeet-tech.blogspot.com/2011/02/pagination-of-results-in-postgres.html<http://gurjeet-tech.blogspot.com/2011/02/pagination-of-results-in-postgres.html>\n\nBest regards,\n-- \nGurjeet Singh\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company\n\nOn Mon, Jul 9, 2012 at 1:46 PM, Andrew Dunstan <[email protected]> wrote:\n\nOn 07/09/2012 01:41 PM, Misa Simic wrote:\n\n\n\n>From my experience users even very rare go to ending pages... easier to them would be to sort data by field to get those rows in very first pages...\n\n\n\n\n\nYeah, the problem really is that most client code wants to know how many pages there are, even if it only wants one page right now.FWIW, I wrote a little about getting the numbered results along with total result count in one query[1]. The suggestions in comments to use CTE provided even better performance.\n[1] http://gurjeet-tech.blogspot.com/2011/02/pagination-of-results-in-postgres.htmlBest regards,\n\n-- \nGurjeet Singh\nEnterpriseDB CorporationThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 9 Jul 2012 14:13:16 -0400",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
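One common way to return the page and the total row count in a single query, in the spirit of the post linked above (this is the window-function form; the linked post's exact queries may differ, and documents is a placeholder table):

    -- count(*) OVER () attaches the full result count to every returned row,
    -- so the client can show "page N of M" without a second query.
    SELECT doc_id,
           title,
           count(*) OVER () AS total_rows
    FROM   documents
    ORDER  BY doc_id
    LIMIT  50
    OFFSET 100;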
{
"msg_contents": "On 07/09/2012 09:22 PM, Shaun Thomas wrote:\n> On 07/09/2012 07:02 AM, Craig Ringer wrote:\n>\n>> Do do cursors.\n>\n> Did you mean \"Do not use cursors\" here?\n>\nOops. \"So do cursors\".\n>> Then the user goes away on a week's holiday and leaves their PC at\n>> your \"next\" button.\n>\n> This exactly. Cursors have limited functionality that isn't directly \n> disruptive to the database in general. At the very least, the \n> transaction ID reservation necessary to preserve a cursor long-term \n> can wreak havoc on your transaction ID wraparound if you have a fairly \n> busy database. I can't think of a single situation where either client \n> caching or LIMIT/OFFSET can't supplant it with better risk levels and \n> costs.\n>\nMy ideal is a cursor with timeout.\n\nIf I could use a cursor but know that the DB would automatically expire \nthe cursor and any associated resources after a certain inactivity \nperiod (_not_ total life, inactivity) that'd be great. Or, for that \nmatter, a cursor the DB could expire when it began to get in the way.\n\nI'm surprised more of the numerous tools that use LIMIT and OFFSET don't \ninstead use cursors that they hold for a short time, then drop if \nthere's no further activity and re-create next time there's interaction \nfrom the user. ORMs that tend to use big joins would particularly \nbenefit from doing this.\n\nI suspect the reason is that many tools - esp ORMs, web frameworks, etc \n- try to be portable between DBs, and cursors are a high-quirk-density \narea in SQL RDBMSs, not to mention unsupported by some DBs. Pity, though.\n\nThere's nothing wrong with using a cursor so long as you don't hang onto \nit over user think-time without also setting a timeout of some kind to \ndestroy it in the background.\n\n--\nCraig Ringer\n\n\n",
"msg_date": "Tue, 10 Jul 2012 07:48:28 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
{
"msg_contents": "Понеділок, 9 липня 2012 р. користувач Misa Simic <[email protected]>\nнаписав:\n>\n>\n> 2012/7/9 Gregg Jaskiewicz <[email protected]>\n>>\n>> Use cursors.\n>> By far the most flexible. offset/limit have their down sides.\n>\n>\n> Well, I am not aware what down sides there are in LIMIT OFFSET what does\nnot exist in any other solutions for paged queries...\n\nwhere key > last-previous-key order by key\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nПонеділок, 9 липня 2012 р. користувач Misa Simic <[email protected]> написав:>>> 2012/7/9 Gregg Jaskiewicz <[email protected]>\n>>>> Use cursors.>> By far the most flexible. offset/limit have their down sides.>>> Well, I am not aware what down sides there are in LIMIT OFFSET what does not exist in any other solutions for paged queries... \nwhere key > last-previous-key order by key-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Wed, 11 Jul 2012 11:15:38 +0300",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
{
"msg_contents": "Понеділок, 9 липня 2012 р. користувач Misa Simic <[email protected]>\nнаписав:\n>\n>\n> 2012/7/9 Gregg Jaskiewicz <[email protected]>\n>>\n>> Use cursors.\n>> By far the most flexible. offset/limit have their down sides.\n>\n>\n> Well, I am not aware what down sides there are in LIMIT OFFSET what does\nnot exist in any other solutions for paged queries...\n\n'where key > last-value order by key limit N' is much better in performance\nfor large offsets.\np.s. Sorry for previous email- hit send too early.\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nПонеділок, 9 липня 2012 р. користувач Misa Simic <[email protected]> написав:>>> 2012/7/9 Gregg Jaskiewicz <[email protected]>\n>>>> Use cursors.>> By far the most flexible. offset/limit have their down sides.>>> Well, I am not aware what down sides there are in LIMIT OFFSET what does not exist in any other solutions for paged queries... \n'where key > last-value order by key limit N' is much better in performance for large offsets.p.s. Sorry for previous email- hit send too early.-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Wed, 11 Jul 2012 11:23:01 +0300",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
}
] |
[
{
"msg_contents": "A vendor has recommended the above drive to us - anyone have experience \nwith it or its predecessor Warpdrive?\n\nhttp://www.storagereview.com/lsi_warpdrive_2_lp_display_idf_2011\nhttp://www.storagereview.com/lsi_warpdrive_slp300_review\n\nThe specs look quite good, and the cards have capacitors on them - \nhowever I can't see any *specific* mention about poweroff safety (am \ngoing to follow that up directly myself).\n\nCheers\n\nMark\n",
"msg_date": "Fri, 06 Jul 2012 12:51:33 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "SSDs again, LSI Warpdrive 2 anyone?"
},
{
"msg_contents": "On 06/07/12 12:51, Mark Kirkwood wrote:\n> A vendor has recommended the above drive to us - anyone have \n> experience with it or its predecessor Warpdrive?\n>\n> http://www.storagereview.com/lsi_warpdrive_2_lp_display_idf_2011\n> http://www.storagereview.com/lsi_warpdrive_slp300_review\n>\n> The specs look quite good, and the cards have capacitors on them - \n> however I can't see any *specific* mention about poweroff safety (am \n> going to follow that up directly myself).\n\nSeems like the \"Warp Drive 2\" was a pre-release name, \"Nytro\" is the \nactual appellation.\n\nhttp://www.lsi.com/channel/products/storagecomponents/Pages/SolidState.aspx\n\nCheers\n\nMark\n",
"msg_date": "Fri, 06 Jul 2012 16:08:41 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSDs again, LSI Warpdrive 2 anyone?"
},
{
"msg_contents": "Hi Mark,\n\nI work for the division at LSI that supports the Nytro WarpDrive and can\nconfirm that these support poweroff safety (data is persistent in the event\nof an abrupt loss of power). The Nytro WarpDrive has onboard capacitance to\nsync intermediate ram buffers to flash, and after powerloss the card is\nimediately available for use (even as a boot device).\n\nThere is lots more information on Nytro Products here: \nhttp://www.thesmarterwaytofaster.com/ http://www.thesmarterwaytofaster.com/ \nAs well as a place to apply for a free trial of the products, so if you\nwould like to give it a try let me know.\n\nJamon Bowen \n \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/SSDs-again-LSI-Warpdrive-2-anyone-tp5715589p5715953.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Mon, 9 Jul 2012 10:16:01 -0700 (PDT)",
"msg_from": "jamonb <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSDs again, LSI Warpdrive 2 anyone?"
}
] |
[
{
"msg_contents": "I spent a good chunk of today trying to chase down why a query on\none box ran in 110ms and on another, smaller box it ran in 10ms.\nThere was no other activity on either box.\n\nBoth boxes are PG9.1.1 RHEL 6.2 x64. the faster box is a smallish \nVM. the other box is a big 40core/256GB box.\n\nThe plans between both boxes were exactly the same, so it didn't occur\nto me to run an analyze on the tables.\n\nI did a number of things including bouncing the DB and reindexing\nsome of the tables. that didn't help.\n\nEventually i separated the query out to a prepared statement and\nfound that it was spending 100ms in PREPARE on the slow box (I \nassume it was planning)\n\nI left the problem for about 30 minutes and came back and the \nquery started running at normal speed. \n\nI suspect an autovacuum kicked in, but would that sort of thing really impact\nparse/plan time to that degree? any other thoughts as to what it could have been?",
"msg_date": "Thu, 5 Jul 2012 19:12:24 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "What would effect planning time?"
}
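A quick way to separate parse/plan time from execution time, along the lines of what the poster did; the query is a stand-in for the real one, \timing is a psql meta-command, and on the 8.4/9.1-era servers discussed here PREPARE parses and plans the statement immediately:

    \timing

    -- The elapsed time of the PREPARE approximates the planner cost in isolation.
    PREPARE slow_q AS
        SELECT d.doc_id, d.title
        FROM   documents d                    -- hypothetical stand-in query
        JOIN   authors a ON a.author_id = d.author_id
        WHERE  a.name = 'someone';

    EXECUTE slow_q;          -- execution time, measured separately

    EXPLAIN EXECUTE slow_q;  -- the cached plan, without re-planning

    DEALLOCATE slow_q;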
] |
[
{
"msg_contents": "Hello PGSQL fans,\nLooking back at my posts the past couple of days and the replies that I've got, I realized that I have failed to make one point clear: we are very pleased with what we have seen from PostgreSQL so far. Let me explain. At this point of developing or porting a benchmark on a new DBMS, the team usually deals with stability, scalability, or fundamental performance issues. Our fear was that working with an open source DBMS, we'd experience more issues than usual. But we got the kit running transactions on PGSQL quickly, and after some early tests, I decided to try the kit on a larger testbed (two other folks are the developers of the benchmark code; I design, run, and analyze the experiments). I have the benchmark running on a 300,000-customer database on a 16-CPU system, unusual for this early in the prototyping phase. People who developed TPC-E (the father of our benchmark) did their prototyping on commercial databases with much smaller databases on smaller systems. On this large testbed, PGSQL has been working like a champ, and performance is what I would call decent. Put in other words, I have been pleasantly surprised by the throughput I am getting out of the system, saturating a 16-way with no visible signs of contention when we reduce the database size.\n\nWe are developing a \"reference\" kit. People are not obligated to use it to publish official results. They can use it to kick the tires, then go to one of the commercial DBMS vendors and ask for their kit for an official TPC-V publication. Even if that's all that people do with the reference kit, our team has achieved the goal that the TPC set for us. What I am trying to do is see if we can take this to the point that people use PGSQL to publish official results and use it in competitive situations. It looks possible, so I'd love to see it happen.\n\nAgain, overall, our experience with PGSQL has been positive, even in terms of performance.\n\nThanks,\nReza\n\n\nHello PGSQL fans,Looking back at my posts the past couple of days and the replies that I’ve got, I realized that I have failed to make one point clear: we are very pleased with what we have seen from PostgreSQL so far. Let me explain. At this point of developing or porting a benchmark on a new DBMS, the team usually deals with stability, scalability, or fundamental performance issues. Our fear was that working with an open source DBMS, we’d experience more issues than usual. But we got the kit running transactions on PGSQL quickly, and after some early tests, I decided to try the kit on a larger testbed (two other folks are the developers of the benchmark code; I design, run, and analyze the experiments). I have the benchmark running on a 300,000-customer database on a 16-CPU system, unusual for this early in the prototyping phase. People who developed TPC-E (the father of our benchmark) did their prototyping on commercial databases with much smaller databases on smaller systems. On this large testbed, PGSQL has been working like a champ, and performance is what I would call decent. Put in other words, I have been pleasantly surprised by the throughput I am getting out of the system, saturating a 16-way with no visible signs of contention when we reduce the database size. We are developing a “reference” kit. People are not obligated to use it to publish official results. They can use it to kick the tires, then go to one of the commercial DBMS vendors and ask for their kit for an official TPC-V publication. 
Even if that’s all that people do with the reference kit, our team has achieved the goal that the TPC set for us. What I am trying to do is see if we can take this to the point that people use PGSQL to publish official results and use it in competitive situations. It looks possible, so I’d love to see it happen. Again, overall, our experience with PGSQL has been positive, even in terms of performance. Thanks,Reza",
"msg_date": "Thu, 5 Jul 2012 20:56:52 -0700",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "The overall experience of TPC-V benchmark team with PostgreSQL"
}
] |
[
{
"msg_contents": "Hello,\n\nTime for a broad question. I'm aware of some specific select queries that will generate disk writes - for example, a sort operation when there's not enough work_mem can cause PG to write out some temp tables (not the correct terminology?). That scenario is easily remedied by enabling \"log_temp_files\" and specifying the threshold in temp file size at which you want logging to happen.\n\nI've recently been trying to put some of my recent reading of Greg's book and other performance-related documentation to use by seeking out queries that take an inordinate amount of time to run. Given that we're usually disk-bound, I've gotten in the habit of running an iostat in a terminal while running and tweaking some of the problem queries. I find this gives me some nice instant feedback on how hard the query is causing PG to hit the disks. What's currently puzzling me are some selects with complex joins and sorts that generate some fairly large bursts of write activity while they run. I was able to reduce this by increasing work_mem (client-side) to give the sorts an opportunity to happen in memory. I now see no temp file writes being logged, and indeed the query sped up.\n\nSo my question is, what else can generate writes when doing read-only operations? I know it sounds like a simple question, but I'm just not finding a concise answer anywhere.\n\nThanks,\n\nCharles",
"msg_date": "Fri, 6 Jul 2012 02:10:36 -0400",
"msg_from": "CSS <[email protected]>",
"msg_from_op": true,
"msg_subject": "select operations that generate disk writes"
},
{
"msg_contents": "Hello\n\n2012/7/6 CSS <[email protected]>:\n> Hello,\n>\n> Time for a broad question. I'm aware of some specific select queries that will generate disk writes - for example, a sort operation when there's not enough work_mem can cause PG to write out some temp tables (not the correct terminology?). That scenario is easily remedied by enabling \"log_temp_files\" and specifying the threshold in temp file size at which you want logging to happen.\n>\n> I've recently been trying to put some of my recent reading of Greg's book and other performance-related documentation to use by seeking out queries that take an inordinate amount of time to run. Given that we're usually disk-bound, I've gotten in the habit of running an iostat in a terminal while running and tweaking some of the problem queries. I find this gives me some nice instant feedback on how hard the query is causing PG to hit the disks. What's currently puzzling me are some selects with complex joins and sorts that generate some fairly large bursts of write activity while they run. I was able to reduce this by increasing work_mem (client-side) to give the sorts an opportunity to happen in memory. I now see no temp file writes being logged, and indeed the query sped up.\n>\n> So my question is, what else can generate writes when doing read-only operations? I know it sounds like a simple question, but I'm just not finding a concise answer anywhere.\n\nstatistics http://www.postgresql.org/docs/9.1/interactive/runtime-config-statistics.html\n\nRegards\n\nPavel\n\n>\n> Thanks,\n>\n> Charles\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Jul 2012 08:20:29 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select operations that generate disk writes"
},
{
"msg_contents": "On 07/06/2012 02:20 PM, Pavel Stehule wrote:\n> Hello\n>\n> 2012/7/6 CSS <[email protected]>:\n>> So my question is, what else can generate writes when doing read-only operations? I know it sounds like a simple question, but I'm just not finding a concise answer anywhere.\n> statistics http://www.postgresql.org/docs/9.1/interactive/runtime-config-statistics.html\n>\n\n Hint bits, too:\n\nhttp://wiki.postgresql.org/wiki/Hint_Bits\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 07/06/2012 02:20 PM, Pavel Stehule\n wrote:\n\n\nHello\n\n2012/7/6 CSS <[email protected]>:\n\n\n\nSo my question is, what else can generate writes when doing read-only operations? I know it sounds like a simple question, but I'm just not finding a concise answer anywhere.\n\n\n\nstatistics http://www.postgresql.org/docs/9.1/interactive/runtime-config-statistics.html\n\n\n\n\n Hint bits, too:\n\n\nhttp://wiki.postgresql.org/wiki/Hint_Bits\n\n --\n Craig Ringer",
"msg_date": "Fri, 06 Jul 2012 14:52:27 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select operations that generate disk writes"
}
] |
[
{
"msg_contents": "I have grabbed one day slow query log and analyzed it by pgfouine, to my\nsurprise, the slowest query is just a simple select statement:\n\n*select diggcontent_data_message.thing_id, diggcontent_data_message.KEY,\ndiggcontent_data_message.value, diggcontent_data_message.kind FROM\ndiggcontent_data_message WHERE diggcontent_data_message.thing_id = 3570882;*\n\n\nwhere thing_id is the primary key, guess how long it takes?\n\n754.61 seconds!!\n\nI tried explain analyze it and below is the result, which is very fast:\n\n*\n*\n*\nexplain analyze select diggcontent_data_message.thing_id,\ndiggcontent_data_message.KEY, diggcontent_data_message.value,\ndiggcontent_data_message.kind FROM diggcontent_data_message WHERE\ndiggcontent_data_message.thing_id = 3570882;\n\nQUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_thing_id_diggcontent_data_message on\ndiggcontent_data_message (cost=0.00..15.34 rows=32 width=51) (actual\ntime=0.080..0.096 rows=8 loops=1)\n Index Cond: (thing_id = 3570882)\n Total runtime: 0.115 ms\n(3 rows)\n*\n\n\nso I wonder could this simple select is innocent and affected badly by\nother queries? how could I find those queries that really slow down the\ndatabase?\nthanks!\n\nI have grabbed one day slow query log and analyzed it by pgfouine, to my surprise, the slowest query is just a simple select statement:select diggcontent_data_message.thing_id, diggcontent_data_message.KEY, diggcontent_data_message.value, diggcontent_data_message.kind FROM diggcontent_data_message WHERE diggcontent_data_message.thing_id = 3570882;\nwhere thing_id is the primary key, guess how long it takes?\n754.61 seconds!! \nI tried explain analyze it and below is the result, which is very fast:\n\n\nexplain analyze select diggcontent_data_message.thing_id, diggcontent_data_message.KEY, diggcontent_data_message.value, diggcontent_data_message.kind FROM diggcontent_data_message WHERE diggcontent_data_message.thing_id = 3570882;\n QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_thing_id_diggcontent_data_message on diggcontent_data_message (cost=0.00..15.34 rows=32 width=51) (actual time=0.080..0.096 rows=8 loops=1) Index Cond: (thing_id = 3570882)\n\n Total runtime: 0.115 ms(3 rows)\nso I wonder could this simple select is innocent and affected badly by other queries? how could I find those queries that really slow down the database? \nthanks!",
"msg_date": "Fri, 6 Jul 2012 14:17:23 +0800",
"msg_from": "Yan Chunlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "how could select id=xx so slow?"
},
{
"msg_contents": "On Thu, Jul 5, 2012 at 11:17 PM, Yan Chunlu <[email protected]> wrote:\n> I have grabbed one day slow query log and analyzed it by pgfouine, to my\n> surprise, the slowest query is just a simple select statement:\n>\n> select diggcontent_data_message.thing_id, diggcontent_data_message.KEY,\n> diggcontent_data_message.value, diggcontent_data_message.kind FROM\n> diggcontent_data_message WHERE diggcontent_data_message.thing_id = 3570882;\n>\n>\n> where thing_id is the primary key, guess how long it takes?\n>\n> 754.61 seconds!!\n\nIs it possible that the size of the tuple is enormous? Because one\narea where I've noticed EXPLAIN ANALYZE blows away normal performance\nis when a lot of the work would be in reassembling, decompressing\n(collectively: de-TOASTING) and sending the data.\n\nEven then, that time seems excessive...but something to think about.\n\n-- \nfdr\n",
"msg_date": "Fri, 6 Jul 2012 02:46:18 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "On 07/06/2012 02:17 PM, Yan Chunlu wrote:\n\n> so I wonder could this simple select is innocent and affected badly by \n> other queries? how could I find those queries that really slow down \n> the database?\n\nIt might not be other queries. Your query could be taking that long \nbecause it was blocked by a lock during maintenance work (say, an ALTER \nTABLE). It's also quite possible that it was held up by a slow \ncheckpoint; check your logs to see if there are warnings about \ncheckpoint activity.\n\n--\nCraig Ringer\n",
"msg_date": "Fri, 06 Jul 2012 19:16:27 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "Yan Chunlu wrote:\n> I have grabbed one day slow query log and analyzed it by pgfouine, to\nmy surprise, the slowest query\n> is just a simple select statement:\n> \n> select diggcontent_data_message.thing_id,\ndiggcontent_data_message.KEY,\n> diggcontent_data_message.value, diggcontent_data_message.kind FROM\ndiggcontent_data_message WHERE\n> diggcontent_data_message.thing_id = 3570882;\n> \n> where thing_id is the primary key, guess how long it takes?\n> \n> 754.61 seconds!!\n> \n> I tried explain analyze it and below is the result, which is very\nfast:\n> \n> explain analyze select diggcontent_data_message.thing_id,\ndiggcontent_data_message.KEY,\n> diggcontent_data_message.value, diggcontent_data_message.kind FROM\ndiggcontent_data_message WHERE\n> diggcontent_data_message.thing_id = 3570882;\n>\nQUERY PLAN\n>\n------------------------------------------------------------------------\n------------------------------\n> -------------------------------------------------------------\n> Index Scan using idx_thing_id_diggcontent_data_message on\ndiggcontent_data_message (cost=0.00..15.34\n> rows=32 width=51) (actual time=0.080..0.096 rows=8 loops=1)\n> Index Cond: (thing_id = 3570882)\n> Total runtime: 0.115 ms\n> (3 rows)\n> \n> so I wonder could this simple select is innocent and affected badly by\nother queries? how could I find\n> those queries that really slow down the database?\n\nAre these by any chance the aggregated costs in pgFouine?\nCould it be that the statement just ran very often and used that time in\ntotal?\n\nOther than that, it could have been blocked by something that takes an\nexclusive lock on the table.\n\nThere are no ON SELECT DO INSTEAD rules or similar things on the table,\nright?\n\nYours,\nLaurenz Albe\n",
"msg_date": "Fri, 6 Jul 2012 15:10:51 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "thanks for all the help. I checked the probability and found that:\n1, the size of tuple is small\n2, I checked the log manually and it indeed cost that much of time, not\naggregated\n\nthe value of \"log_min_messages\" in postgresql.conf is error, I have changed\nit to \"warning\", so far does not received any warning, still waiting.\n\nbeside I do see some COMMIT which is relatively slow for example:\n 60 2012-07-08 00:00:29 CST [19367]: [131-1] LOG: duration: 375.851 ms\n statement: COMMIT\n 61 2012-07-08 00:00:30 CST [19367]: [132-1] LOG: duration: 327.964 ms\n statement: COMMIT\n\nbut only one \"BEGIN\" in the same one day log file, did that influence the\nquery time too?\n\n\nOn Fri, Jul 6, 2012 at 9:10 PM, Albe Laurenz <[email protected]>wrote:\n\n> Yan Chunlu wrote:\n> > I have grabbed one day slow query log and analyzed it by pgfouine, to\n> my surprise, the slowest query\n> > is just a simple select statement:\n> >\n> > select diggcontent_data_message.thing_id,\n> diggcontent_data_message.KEY,\n> > diggcontent_data_message.value, diggcontent_data_message.kind FROM\n> diggcontent_data_message WHERE\n> > diggcontent_data_message.thing_id = 3570882;\n> >\n> > where thing_id is the primary key, guess how long it takes?\n> >\n> > 754.61 seconds!!\n> >\n> > I tried explain analyze it and below is the result, which is very\n> fast:\n> >\n> > explain analyze select diggcontent_data_message.thing_id,\n> diggcontent_data_message.KEY,\n> > diggcontent_data_message.value, diggcontent_data_message.kind FROM\n> diggcontent_data_message WHERE\n> > diggcontent_data_message.thing_id = 3570882;\n> >\n> QUERY PLAN\n> >\n> ------------------------------------------------------------------------\n> ------------------------------\n> > -------------------------------------------------------------\n> > Index Scan using idx_thing_id_diggcontent_data_message on\n> diggcontent_data_message (cost=0.00..15.34\n> > rows=32 width=51) (actual time=0.080..0.096 rows=8 loops=1)\n> > Index Cond: (thing_id = 3570882)\n> > Total runtime: 0.115 ms\n> > (3 rows)\n> >\n> > so I wonder could this simple select is innocent and affected badly by\n> other queries? how could I find\n> > those queries that really slow down the database?\n>\n> Are these by any chance the aggregated costs in pgFouine?\n> Could it be that the statement just ran very often and used that time in\n> total?\n>\n> Other than that, it could have been blocked by something that takes an\n> exclusive lock on the table.\n>\n> There are no ON SELECT DO INSTEAD rules or similar things on the table,\n> right?\n>\n> Yours,\n> Laurenz Albe\n>\n\nthanks for all the help. 
I checked the probability and found that:1, the size of tuple is small2, I checked the log manually and it indeed cost that much of time, not aggregatedthe value of \"log_min_messages\" in postgresql.conf is error, I have changed it to \"warning\", so far does not received any warning, still waiting.\nbeside I do see some COMMIT which is relatively slow for example: 60 2012-07-08 00:00:29 CST [19367]: [131-1] LOG: duration: 375.851 ms statement: COMMIT 61 2012-07-08 00:00:30 CST [19367]: [132-1] LOG: duration: 327.964 ms statement: COMMIT\nbut only one \"BEGIN\" in the same one day log file, did that influence the query time too?On Fri, Jul 6, 2012 at 9:10 PM, Albe Laurenz <[email protected]> wrote:\nYan Chunlu wrote:\n> I have grabbed one day slow query log and analyzed it by pgfouine, to\nmy surprise, the slowest query\n> is just a simple select statement:\n>\n> select diggcontent_data_message.thing_id,\ndiggcontent_data_message.KEY,\n> diggcontent_data_message.value, diggcontent_data_message.kind FROM\ndiggcontent_data_message WHERE\n> diggcontent_data_message.thing_id = 3570882;\n>\n> where thing_id is the primary key, guess how long it takes?\n>\n> 754.61 seconds!!\n>\n> I tried explain analyze it and below is the result, which is very\nfast:\n>\n> explain analyze select diggcontent_data_message.thing_id,\ndiggcontent_data_message.KEY,\n> diggcontent_data_message.value, diggcontent_data_message.kind FROM\ndiggcontent_data_message WHERE\n> diggcontent_data_message.thing_id = 3570882;\n>\nQUERY PLAN\n>\n------------------------------------------------------------------------\n------------------------------\n> -------------------------------------------------------------\n> Index Scan using idx_thing_id_diggcontent_data_message on\ndiggcontent_data_message (cost=0.00..15.34\n> rows=32 width=51) (actual time=0.080..0.096 rows=8 loops=1)\n> Index Cond: (thing_id = 3570882)\n> Total runtime: 0.115 ms\n> (3 rows)\n>\n> so I wonder could this simple select is innocent and affected badly by\nother queries? how could I find\n> those queries that really slow down the database?\n\nAre these by any chance the aggregated costs in pgFouine?\nCould it be that the statement just ran very often and used that time in\ntotal?\n\nOther than that, it could have been blocked by something that takes an\nexclusive lock on the table.\n\nThere are no ON SELECT DO INSTEAD rules or similar things on the table,\nright?\n\nYours,\nLaurenz Albe",
"msg_date": "Mon, 9 Jul 2012 17:20:56 +0800",
"msg_from": "Yan Chunlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "On 07/09/2012 05:20 PM, Yan Chunlu wrote:\n>\n> the value of \"log_min_messages\" in postgresql.conf is error, I have \n> changed it to \"warning\", so far does not received any warning, still \n> waiting.\nWhen trying to track down performance issues, increasing logging to at \nleast `info' would seem to be sensible.\n\nI suggest increasing your logging and enabling the auto_explain module \nso it logs slow queries. If you can afford the substantial performance \nhit you could enable its analyze mode to get details on why.\n\n>\n> beside I do see some COMMIT which is relatively slow for example:\n> 60 2012-07-08 00:00:29 CST [19367]: [131-1] LOG: duration: 375.851 \n> ms statement: COMMIT\n> 61 2012-07-08 00:00:30 CST [19367]: [132-1] LOG: duration: 327.964 \n> ms statement: COMMIT\nThat certainly is slow. Again, I suspect checkpoint activity could be at \nfault. You may need to tune to spread your checkpoints out and use more \naggressive bgwriter settings. See the wiki for performance tuning info.\n\n>\n> but only one \"BEGIN\" in the same one day log file, did that influence \n> the query time too?\n\nOnly one BEGIN in the whole day? Or do you mean \"only one BEGIN slow \nenough to trigger slow query logging\" ?\n\nDo you have a \"log_min_duration_statement\" directive set in your \npostgresql.conf ?\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 10 Jul 2012 09:25:33 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "the transaction part is wired, I have filtered BEGIN and COMMIT from a one\nday log by using:\ncat /usr/local/pgsql/data/pg_log/Saturday.log |grep -E \"BEGIN|COMMIT\"\n>trans.txt\n\nand pasted it to gist(only three BEGIN and many COMMIT):\nhttps://gist.github.com/3080600\n\nI didn't set log_min_duration_statement in the postgresql.conf, but execute\n*dbapi_con.cursor().execute(\"SET log_min_duration_statement to 30\")*\n*for every connection.*\n*the system is in production and relatively heavy load, I thought it would\nbe better not \"reload\" the db too frequently, use the **\"SET\nlog_min_duration_statement to 30\" I could turn the log off within my\napplication.*\n\n\n\nOn Tue, Jul 10, 2012 at 9:25 AM, Craig Ringer <[email protected]> wrote:\n\n> On 07/09/2012 05:20 PM, Yan Chunlu wrote:\n>\n>>\n>> the value of \"log_min_messages\" in postgresql.conf is error, I have\n>> changed it to \"warning\", so far does not received any warning, still\n>> waiting.\n>>\n> When trying to track down performance issues, increasing logging to at\n> least `info' would seem to be sensible.\n>\n> I suggest increasing your logging and enabling the auto_explain module so\n> it logs slow queries. If you can afford the substantial performance hit you\n> could enable its analyze mode to get details on why.\n>\n>\n>\n>> beside I do see some COMMIT which is relatively slow for example:\n>> 60 2012-07-08 00:00:29 CST [19367]: [131-1] LOG: duration: 375.851 ms\n>> statement: COMMIT\n>> 61 2012-07-08 00:00:30 CST [19367]: [132-1] LOG: duration: 327.964 ms\n>> statement: COMMIT\n>>\n> That certainly is slow. Again, I suspect checkpoint activity could be at\n> fault. You may need to tune to spread your checkpoints out and use more\n> aggressive bgwriter settings. See the wiki for performance tuning info.\n>\n>\n>\n>> but only one \"BEGIN\" in the same one day log file, did that influence the\n>> query time too?\n>>\n>\n> Only one BEGIN in the whole day? Or do you mean \"only one BEGIN slow\n> enough to trigger slow query logging\" ?\n>\n> Do you have a \"log_min_duration_statement\" directive set in your\n> postgresql.conf ?\n>\n> --\n> Craig Ringer\n>\n\nthe transaction part is wired, I have filtered BEGIN and COMMIT from a one day log by using:cat /usr/local/pgsql/data/pg_log/Saturday.log |grep -E \"BEGIN|COMMIT\" >trans.txtand pasted it to gist(only three BEGIN and many COMMIT):\nhttps://gist.github.com/3080600I didn't set log_min_duration_statement in the postgresql.conf, but executedbapi_con.cursor().execute(\"SET log_min_duration_statement to 30\")\nfor every connection.the system is in production and relatively heavy load, I thought it would be better not \"reload\" the db too frequently, use the \"SET log_min_duration_statement to 30\" I could turn the log off within my application.\nOn Tue, Jul 10, 2012 at 9:25 AM, Craig Ringer <[email protected]> wrote:\nOn 07/09/2012 05:20 PM, Yan Chunlu wrote:\n\n\nthe value of \"log_min_messages\" in postgresql.conf is error, I have changed it to \"warning\", so far does not received any warning, still waiting.\n\nWhen trying to track down performance issues, increasing logging to at least `info' would seem to be sensible.\n\nI suggest increasing your logging and enabling the auto_explain module so it logs slow queries. 
If you can afford the substantial performance hit you could enable its analyze mode to get details on why.\n\n\n\nbeside I do see some COMMIT which is relatively slow for example:\n 60 2012-07-08 00:00:29 CST [19367]: [131-1] LOG: duration: 375.851 ms statement: COMMIT\n 61 2012-07-08 00:00:30 CST [19367]: [132-1] LOG: duration: 327.964 ms statement: COMMIT\n\nThat certainly is slow. Again, I suspect checkpoint activity could be at fault. You may need to tune to spread your checkpoints out and use more aggressive bgwriter settings. See the wiki for performance tuning info.\n\n\n\n\nbut only one \"BEGIN\" in the same one day log file, did that influence the query time too?\n\n\nOnly one BEGIN in the whole day? Or do you mean \"only one BEGIN slow enough to trigger slow query logging\" ?\n\nDo you have a \"log_min_duration_statement\" directive set in your postgresql.conf ?\n\n--\nCraig Ringer",
"msg_date": "Tue, 10 Jul 2012 10:25:59 +0800",
"msg_from": "Yan Chunlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "On 07/10/2012 10:25 AM, Yan Chunlu wrote:\n> I didn't set log_min_duration_statement in the postgresql.conf, but \n> execute\n> /dbapi_con.cursor().execute(\"SET log_min_duration_statement to 30\")/\n> /for every connection./\n\nOK, same effect: You're only logging slow statements.\n\nIt's not at all surprising that BEGIN doesn't appear when a \nlog_min_duration_statement is set. It's an incredibly fast operation. \nWhat's amazing is that it appears even once - that means your database \nmust be in serious performance trouble, as BEGIN should take tenths of a \nmillisecond on an unloaded system. For example my quick test here:\n\nLOG: statement: BEGIN;\nLOG: duration: 0.193 ms\n\n... which is actually a lot slower than I expected, but hardly slow \nstatement material.\n\nThe frequent appearance of slow (multi-second) COMMIT statements in your \nslow statement logs suggests there's enough load on your database that \nthere's real contention for disk, and/or that checkpoints are stalling \ntransactions.\n\n\nFirst, you need to set log_min_messages = 'info' to allow Pg to complain \nabout things like checkpoint frequency.\n\nNow temporarily set log_checkpoints = on to record when checkpoints \nhappen and how long they take. Most likely you'll find you need to tune \ncheckpoint behaviour. Some information, albeit old, on that is here:\n\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm \n<http://www.westnet.com/%7Egsmith/content/postgresql/chkp-bgw-83.htm>\n\nBasically you might want to try increasing your \ncheckpoint_completion_target and making the bgwriter more aggressive - \nassuming that your performance issues are in fact checkpoint related.\n\nIt's also possible that they're just overall load, especially if you \nhave lots and lots (hundreds) of connections to the database all trying \nto do work at once without any kind of admission control or \npooling/queuing. In that case, introducing a connection pool like \nPgBouncer may help.\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 07/10/2012 10:25 AM, Yan Chunlu\n wrote:\n\nI didn't set log_min_duration_statement in the\n postgresql.conf, but execute\ndbapi_con.cursor().execute(\"SET log_min_duration_statement\n to 30\")\nfor every connection.\n\n\n OK, same effect: You're only logging slow statements.\n\n It's not at all surprising that BEGIN doesn't appear when a\n log_min_duration_statement is set. It's an incredibly fast\n operation. What's amazing is that it appears even once - that means\n your database must be in serious performance trouble, as BEGIN\n should take tenths of a millisecond on an unloaded system. For\n example my quick test here:\n\n LOG: statement: BEGIN;\n LOG: duration: 0.193 ms\n\n ... which is actually a lot slower than I expected, but hardly slow\n statement material.\n\n The frequent appearance of slow (multi-second) COMMIT statements in\n your slow statement logs suggests there's enough load on your\n database that there's real contention for disk, and/or that\n checkpoints are stalling transactions. \n\n\n First, you need to set log_min_messages = 'info' to allow Pg to\n complain about things like checkpoint frequency.\n\n Now temporarily set log_checkpoints = on to record when checkpoints\n happen and how long they take. Most likely you'll find you need to\n tune checkpoint behaviour. 
Some information, albeit old, on that is\n here:\n\n \n \nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\n Basically you might want to try increasing your\n checkpoint_completion_target and making the bgwriter more aggressive\n - assuming that your performance issues are in fact checkpoint\n related.\n\n It's also possible that they're just overall load, especially if you\n have lots and lots (hundreds) of connections to the database all\n trying to do work at once without any kind of admission control or\n pooling/queuing. In that case, introducing a connection pool like\n PgBouncer may help.\n\n --\n Craig Ringer",
"msg_date": "Tue, 10 Jul 2012 10:46:25 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "great thanks for the help and explanation, I will start logging the\ninformation you mentioned and do some analysis.\n\n\n\nOn Tue, Jul 10, 2012 at 10:46 AM, Craig Ringer <[email protected]>wrote:\n\n> On 07/10/2012 10:25 AM, Yan Chunlu wrote:\n>\n> I didn't set log_min_duration_statement in the postgresql.conf, but execute\n> *dbapi_con.cursor().execute(\"SET log_min_duration_statement to 30\")*\n> *for every connection.*\n>\n>\n> OK, same effect: You're only logging slow statements.\n>\n> It's not at all surprising that BEGIN doesn't appear when a\n> log_min_duration_statement is set. It's an incredibly fast operation.\n> What's amazing is that it appears even once - that means your database must\n> be in serious performance trouble, as BEGIN should take tenths of a\n> millisecond on an unloaded system. For example my quick test here:\n>\n> LOG: statement: BEGIN;\n> LOG: duration: 0.193 ms\n>\n> ... which is actually a lot slower than I expected, but hardly slow\n> statement material.\n>\n> The frequent appearance of slow (multi-second) COMMIT statements in your\n> slow statement logs suggests there's enough load on your database that\n> there's real contention for disk, and/or that checkpoints are stalling\n> transactions.\n>\n>\n> First, you need to set log_min_messages = 'info' to allow Pg to complain\n> about things like checkpoint frequency.\n>\n> Now temporarily set log_checkpoints = on to record when checkpoints happen\n> and how long they take. Most likely you'll find you need to tune checkpoint\n> behaviour. Some information, albeit old, on that is here:\n>\n> http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n>\n> Basically you might want to try increasing your\n> checkpoint_completion_target and making the bgwriter more aggressive -\n> assuming that your performance issues are in fact checkpoint related.\n>\n> It's also possible that they're just overall load, especially if you have\n> lots and lots (hundreds) of connections to the database all trying to do\n> work at once without any kind of admission control or pooling/queuing. In\n> that case, introducing a connection pool like PgBouncer may help.\n>\n> --\n> Craig Ringer\n>\n\ngreat thanks for the help and explanation, I will start logging the information you mentioned and do some analysis.On Tue, Jul 10, 2012 at 10:46 AM, Craig Ringer <[email protected]> wrote:\n\n\nOn 07/10/2012 10:25 AM, Yan Chunlu\n wrote:\n\nI didn't set log_min_duration_statement in the\n postgresql.conf, but execute\ndbapi_con.cursor().execute(\"SET log_min_duration_statement\n to 30\")\nfor every connection.\n\n\n OK, same effect: You're only logging slow statements.\n\n It's not at all surprising that BEGIN doesn't appear when a\n log_min_duration_statement is set. It's an incredibly fast\n operation. What's amazing is that it appears even once - that means\n your database must be in serious performance trouble, as BEGIN\n should take tenths of a millisecond on an unloaded system. For\n example my quick test here:\n\n LOG: statement: BEGIN;\n LOG: duration: 0.193 ms\n\n ... which is actually a lot slower than I expected, but hardly slow\n statement material.\n\n The frequent appearance of slow (multi-second) COMMIT statements in\n your slow statement logs suggests there's enough load on your\n database that there's real contention for disk, and/or that\n checkpoints are stalling transactions. 
\n\n\n First, you need to set log_min_messages = 'info' to allow Pg to\n complain about things like checkpoint frequency.\n\n Now temporarily set log_checkpoints = on to record when checkpoints\n happen and how long they take. Most likely you'll find you need to\n tune checkpoint behaviour. Some information, albeit old, on that is\n here:\n\n \n \n http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\n Basically you might want to try increasing your\n checkpoint_completion_target and making the bgwriter more aggressive\n - assuming that your performance issues are in fact checkpoint\n related.\n\n It's also possible that they're just overall load, especially if you\n have lots and lots (hundreds) of connections to the database all\n trying to do work at once without any kind of admission control or\n pooling/queuing. In that case, introducing a connection pool like\n PgBouncer may help.\n\n --\n Craig Ringer",
"msg_date": "Tue, 10 Jul 2012 10:58:46 +0800",
"msg_from": "Yan Chunlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "I have logged one day data and found the checkpoint is rather\nfrequently(detail: https://gist.github.com/3088338). Not sure if it is\nnormal, but the average time of checkpoint is about 100sec~200sec, it seems\nrelated with my settings:\n\n574 checkpoint_segments = 64\n575 wal_keep_segments = 5000\n\nI set checkpoint_segments as a very large value which is because otherwise\nthe slave server always can not follow the master, should I lower that\nvalue?\n\nor the slow query is about something else? thanks!\n\nOn Tue, Jul 10, 2012 at 10:46 AM, Craig Ringer <[email protected]>wrote:\n\n> On 07/10/2012 10:25 AM, Yan Chunlu wrote:\n>\n> I didn't set log_min_duration_statement in the postgresql.conf, but execute\n> *dbapi_con.cursor().execute(\"SET log_min_duration_statement to 30\")*\n> *for every connection.*\n>\n>\n> OK, same effect: You're only logging slow statements.\n>\n> It's not at all surprising that BEGIN doesn't appear when a\n> log_min_duration_statement is set. It's an incredibly fast operation.\n> What's amazing is that it appears even once - that means your database must\n> be in serious performance trouble, as BEGIN should take tenths of a\n> millisecond on an unloaded system. For example my quick test here:\n>\n> LOG: statement: BEGIN;\n> LOG: duration: 0.193 ms\n>\n> ... which is actually a lot slower than I expected, but hardly slow\n> statement material.\n>\n> The frequent appearance of slow (multi-second) COMMIT statements in your\n> slow statement logs suggests there's enough load on your database that\n> there's real contention for disk, and/or that checkpoints are stalling\n> transactions.\n>\n>\n> First, you need to set log_min_messages = 'info' to allow Pg to complain\n> about things like checkpoint frequency.\n>\n> Now temporarily set log_checkpoints = on to record when checkpoints happen\n> and how long they take. Most likely you'll find you need to tune checkpoint\n> behaviour. Some information, albeit old, on that is here:\n>\n> http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n>\n> Basically you might want to try increasing your\n> checkpoint_completion_target and making the bgwriter more aggressive -\n> assuming that your performance issues are in fact checkpoint related.\n>\n> It's also possible that they're just overall load, especially if you have\n> lots and lots (hundreds) of connections to the database all trying to do\n> work at once without any kind of admission control or pooling/queuing. In\n> that case, introducing a connection pool like PgBouncer may help.\n>\n> --\n> Craig Ringer\n>\n\nI have logged one day data and found the checkpoint is rather frequently(detail: https://gist.github.com/3088338). Not sure if it is normal, but the average time of checkpoint is about 100sec~200sec, it seems related with my settings:\n574 checkpoint_segments = 64575 wal_keep_segments = 5000I set checkpoint_segments as a very large value which is because otherwise the slave server always can not follow the master, should I lower that value? \nor the slow query is about something else? 
thanks!On Tue, Jul 10, 2012 at 10:46 AM, Craig Ringer <[email protected]> wrote:\n\n\nOn 07/10/2012 10:25 AM, Yan Chunlu\n wrote:\n\nI didn't set log_min_duration_statement in the\n postgresql.conf, but execute\ndbapi_con.cursor().execute(\"SET log_min_duration_statement\n to 30\")\nfor every connection.\n\n\n OK, same effect: You're only logging slow statements.\n\n It's not at all surprising that BEGIN doesn't appear when a\n log_min_duration_statement is set. It's an incredibly fast\n operation. What's amazing is that it appears even once - that means\n your database must be in serious performance trouble, as BEGIN\n should take tenths of a millisecond on an unloaded system. For\n example my quick test here:\n\n LOG: statement: BEGIN;\n LOG: duration: 0.193 ms\n\n ... which is actually a lot slower than I expected, but hardly slow\n statement material.\n\n The frequent appearance of slow (multi-second) COMMIT statements in\n your slow statement logs suggests there's enough load on your\n database that there's real contention for disk, and/or that\n checkpoints are stalling transactions. \n\n\n First, you need to set log_min_messages = 'info' to allow Pg to\n complain about things like checkpoint frequency.\n\n Now temporarily set log_checkpoints = on to record when checkpoints\n happen and how long they take. Most likely you'll find you need to\n tune checkpoint behaviour. Some information, albeit old, on that is\n here:\n\n \n \n http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\n Basically you might want to try increasing your\n checkpoint_completion_target and making the bgwriter more aggressive\n - assuming that your performance issues are in fact checkpoint\n related.\n\n It's also possible that they're just overall load, especially if you\n have lots and lots (hundreds) of connections to the database all\n trying to do work at once without any kind of admission control or\n pooling/queuing. In that case, introducing a connection pool like\n PgBouncer may help.\n\n --\n Craig Ringer",
"msg_date": "Wed, 11 Jul 2012 14:24:24 +0800",
"msg_from": "Yan Chunlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "Yan Chunlu wrote:\n> I have logged one day data and found the checkpoint is rather\nfrequently(detail:\n> https://gist.github.com/3088338). Not sure if it is normal, but the\naverage time of checkpoint is\n> about 100sec~200sec, it seems related with my settings:\n> \n> 574 checkpoint_segments = 64\n> 575 wal_keep_segments = 5000\n> \n> I set checkpoint_segments as a very large value which is because\notherwise the slave server always can\n> not follow the master, should I lower that value?\n\nYou mean, you set wal_keep_segments high for the standby, right?\n\nwal_keep_segments has no impact on checkpoint frequency and intensity.\n\nYou are right that your checkpoint frequency is high. What is your value\nof checkpoint_timeout?\n\nYou can increase the value of checkpoint_segments to decrease the\ncheckpoint frequence, but recovery will take longer then.\n\n> or the slow query is about something else? thanks!\n\nI guess the question is how saturated the I/O system is during\ncheckpoints. But even if it is very busy, I find it hard to believe\nthat such a trivial statement can take extremely long.\n\nYours,\nLaurenz Albe\n",
"msg_date": "Wed, 11 Jul 2012 10:23:07 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "could that because of my system is really busy?\n1, postgresql always have 400+ connections(dozens of python process using\nclient pool)\n2, the query peak is 50+/s\n3, I do have some bad performance sql executing periodically, need 100+\nsecond to complete. could those bad performance sql influence others?\n because when I execute those simple sql directly, they was fast. but the\nslow query log shows it took too much time.\n\n\n\n\nOn Wed, Jul 11, 2012 at 4:23 PM, Albe Laurenz <[email protected]>wrote:\n\n> Yan Chunlu wrote:\n> > I have logged one day data and found the checkpoint is rather\n> frequently(detail:\n> > https://gist.github.com/3088338). Not sure if it is normal, but the\n> average time of checkpoint is\n> > about 100sec~200sec, it seems related with my settings:\n> >\n> > 574 checkpoint_segments = 64\n> > 575 wal_keep_segments = 5000\n> >\n> > I set checkpoint_segments as a very large value which is because\n> otherwise the slave server always can\n> > not follow the master, should I lower that value?\n>\n> You mean, you set wal_keep_segments high for the standby, right?\n>\n> wal_keep_segments has no impact on checkpoint frequency and intensity.\n>\n> You are right that your checkpoint frequency is high. What is your value\n> of checkpoint_timeout?\n>\n> You can increase the value of checkpoint_segments to decrease the\n> checkpoint frequence, but recovery will take longer then.\n>\n> > or the slow query is about something else? thanks!\n>\n> I guess the question is how saturated the I/O system is during\n> checkpoints. But even if it is very busy, I find it hard to believe\n> that such a trivial statement can take extremely long.\n>\n> Yours,\n> Laurenz Albe\n>\n\ncould that because of my system is really busy? 1, postgresql always have 400+ connections(dozens of python process using client pool)2, the query peak is 50+/s3, I do have some bad performance sql executing periodically, need 100+ second to complete. could those bad performance sql influence others? because when I execute those simple sql directly, they was fast. but the slow query log shows it took too much time.\nOn Wed, Jul 11, 2012 at 4:23 PM, Albe Laurenz <[email protected]> wrote:\nYan Chunlu wrote:\n> I have logged one day data and found the checkpoint is rather\nfrequently(detail:\n> https://gist.github.com/3088338). Not sure if it is normal, but the\naverage time of checkpoint is\n> about 100sec~200sec, it seems related with my settings:\n>\n> 574 checkpoint_segments = 64\n> 575 wal_keep_segments = 5000\n>\n> I set checkpoint_segments as a very large value which is because\notherwise the slave server always can\n> not follow the master, should I lower that value?\n\nYou mean, you set wal_keep_segments high for the standby, right?\n\nwal_keep_segments has no impact on checkpoint frequency and intensity.\n\nYou are right that your checkpoint frequency is high. What is your value\nof checkpoint_timeout?\n\nYou can increase the value of checkpoint_segments to decrease the\ncheckpoint frequence, but recovery will take longer then.\n\n> or the slow query is about something else? thanks!\n\nI guess the question is how saturated the I/O system is during\ncheckpoints. But even if it is very busy, I find it hard to believe\nthat such a trivial statement can take extremely long.\n\nYours,\nLaurenz Albe",
"msg_date": "Wed, 11 Jul 2012 19:40:35 +0800",
"msg_from": "Yan Chunlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "On Wed, Jul 11, 2012 at 9:24 AM, Yan Chunlu <[email protected]> wrote:\n> I have logged one day data and found the checkpoint is rather\n> frequently(detail: https://gist.github.com/3088338). Not sure if it is\n> normal, but the average time of checkpoint is about 100sec~200sec, it seems\n> related with my settings:\n>\n> 574 checkpoint_segments = 64\n> 575 wal_keep_segments = 5000\n>\n> I set checkpoint_segments as a very large value which is because otherwise\n> the slave server always can not follow the master, should I lower that\n> value?\n>\n> or the slow query is about something else? thanks!\n\nSome things to notice from the checkpoints log:\n* All chcekpoints are triggered by checkpoint_timeout, using up only a\ncouple log files\n* Checkpoints write out around 40MB of buffers\n* The write out period is spread out nicely like it's supposed to but\nthe sync phase is occasionally taking a very long time (more than 2\nminutes)\n\nThis looks like something (not necessarily the checkpoint sync itself)\nis overloading the IO system. You might want to monitor the IO load\nwith iostat and correlate it with the checkpoints and slow queries to\nfind the culprit. It's also possible that something else is causing\nthe issues.\n\nIf the cause is checkpoints, just making them less frequent might make\nthe problem worse. I'm assuming you have 16GB+ of RAM because you have\n4GB of shared_buffers. Just making checkpoint_timeout longer will\naccumulate a larger number of dirty buffers that will clog up the IO\nqueues even worse. If you are on Linux, lowering\ndirty_expire_centisecs or dirty_background_bytes might help to spread\nthe load out but will make overall throughput worse.\n\nOn the otherhand, if the I/O overload is from queries (more likely\nbecause some checkpoints sync quickly) there are no easy tuning\nanswers. Making queries less IO intensive is probably the best you can\ndo. From the tuning side, newer Linux kernels handle I/O fairness a\nlot better, and you could also try tweaking the I/O scheduler to\nachieve better throughput to avoid congestion or at least provide\nbetter latency for trivial queries. And of course its always possible\nto throw more hardware at the problem and upgrade the I/O subsystem.\n\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n",
"msg_date": "Wed, 11 Jul 2012 14:59:35 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "huge thanks for the patient explanations, I think you are right, it is\nreally related to the IO.\nI monitor the IO using iostat -x and found the utilize part reach 100%\nfrequently, postgresql is the only service running on that machine, so I\nthink it is either checkpoint or queries caused the problem.\n\nand I agree that checkpoint may not the problem, I guess I need to tackle\nthose damn queries.\ncurrently the data dir(pgsql/data/base) used 111GB disk space, some tables\nhas tens of millions records. could that cause the query heavy disk IO?\n when should I split the data to other machines(aka sharding)?\n\n\nand you are right the machine has 16GB memory and commodity 500GB disk.\nkernel: Linux adams 2.6.26-2-amd64 #1 SMP Mon Jun 13 16:29:33 UTC 2011\nx86_64 GNU/Linux\n\nby \"new kernel\" which version do you mean?\n\nand about those IO intensive queries, I can only tell the time used from\nslow query log, is there anything like \"explain analyze\" that shows\nspecific information about IO usage?\n\n\n\n\nOn Wed, Jul 11, 2012 at 7:59 PM, Ants Aasma <[email protected]> wrote:\n\n> On Wed, Jul 11, 2012 at 9:24 AM, Yan Chunlu <[email protected]> wrote:\n> > I have logged one day data and found the checkpoint is rather\n> > frequently(detail: https://gist.github.com/3088338). Not sure if it is\n> > normal, but the average time of checkpoint is about 100sec~200sec, it\n> seems\n> > related with my settings:\n> >\n> > 574 checkpoint_segments = 64\n> > 575 wal_keep_segments = 5000\n> >\n> > I set checkpoint_segments as a very large value which is because\n> otherwise\n> > the slave server always can not follow the master, should I lower that\n> > value?\n> >\n> > or the slow query is about something else? thanks!\n>\n> Some things to notice from the checkpoints log:\n> * All chcekpoints are triggered by checkpoint_timeout, using up only a\n> couple log files\n> * Checkpoints write out around 40MB of buffers\n> * The write out period is spread out nicely like it's supposed to but\n> the sync phase is occasionally taking a very long time (more than 2\n> minutes)\n>\n> This looks like something (not necessarily the checkpoint sync itself)\n> is overloading the IO system. You might want to monitor the IO load\n> with iostat and correlate it with the checkpoints and slow queries to\n> find the culprit. It's also possible that something else is causing\n> the issues.\n>\n> If the cause is checkpoints, just making them less frequent might make\n> the problem worse. I'm assuming you have 16GB+ of RAM because you have\n> 4GB of shared_buffers. Just making checkpoint_timeout longer will\n> accumulate a larger number of dirty buffers that will clog up the IO\n> queues even worse. If you are on Linux, lowering\n> dirty_expire_centisecs or dirty_background_bytes might help to spread\n> the load out but will make overall throughput worse.\n>\n> On the otherhand, if the I/O overload is from queries (more likely\n> because some checkpoints sync quickly) there are no easy tuning\n> answers. Making queries less IO intensive is probably the best you can\n> do. From the tuning side, newer Linux kernels handle I/O fairness a\n> lot better, and you could also try tweaking the I/O scheduler to\n> achieve better throughput to avoid congestion or at least provide\n> better latency for trivial queries. 
And of course its always possible\n> to throw more hardware at the problem and upgrade the I/O subsystem.\n>\n> Ants Aasma\n> --\n> Cybertec Schönig & Schönig GmbH\n> Gröhrmühlgasse 26\n> A-2700 Wiener Neustadt\n> Web: http://www.postgresql-support.de\n>\n\nhuge thanks for the patient explanations, I think you are right, it is really related to the IO.I monitor the IO using iostat -x and found the utilize part reach 100% frequently, postgresql is the only service running on that machine, so I think it is either checkpoint or queries caused the problem. \nand I agree that checkpoint may not the problem, I guess I need to tackle those damn queries.currently the data dir(pgsql/data/base) used 111GB disk space, some tables has tens of millions records. could that cause the query heavy disk IO? when should I split the data to other machines(aka sharding)? \nand you are right the machine has 16GB memory and commodity 500GB disk. kernel: Linux adams 2.6.26-2-amd64 #1 SMP Mon Jun 13 16:29:33 UTC 2011 x86_64 GNU/Linux\n\nby \"new kernel\" which version do you mean? and about those IO intensive queries, I can only tell the time used from slow query log, is there anything like \"explain analyze\" that shows specific information about IO usage?\nOn Wed, Jul 11, 2012 at 7:59 PM, Ants Aasma <[email protected]> wrote:\nOn Wed, Jul 11, 2012 at 9:24 AM, Yan Chunlu <[email protected]> wrote:\n\n\n> I have logged one day data and found the checkpoint is rather\n> frequently(detail: https://gist.github.com/3088338). Not sure if it is\n> normal, but the average time of checkpoint is about 100sec~200sec, it seems\n> related with my settings:\n>\n> 574 checkpoint_segments = 64\n> 575 wal_keep_segments = 5000\n>\n> I set checkpoint_segments as a very large value which is because otherwise\n> the slave server always can not follow the master, should I lower that\n> value?\n>\n> or the slow query is about something else? thanks!\n\nSome things to notice from the checkpoints log:\n* All chcekpoints are triggered by checkpoint_timeout, using up only a\ncouple log files\n* Checkpoints write out around 40MB of buffers\n* The write out period is spread out nicely like it's supposed to but\nthe sync phase is occasionally taking a very long time (more than 2\nminutes)\n\nThis looks like something (not necessarily the checkpoint sync itself)\nis overloading the IO system. You might want to monitor the IO load\nwith iostat and correlate it with the checkpoints and slow queries to\nfind the culprit. It's also possible that something else is causing\nthe issues.\n\nIf the cause is checkpoints, just making them less frequent might make\nthe problem worse. I'm assuming you have 16GB+ of RAM because you have\n4GB of shared_buffers. Just making checkpoint_timeout longer will\naccumulate a larger number of dirty buffers that will clog up the IO\nqueues even worse. If you are on Linux, lowering\ndirty_expire_centisecs or dirty_background_bytes might help to spread\nthe load out but will make overall throughput worse.\n\nOn the otherhand, if the I/O overload is from queries (more likely\nbecause some checkpoints sync quickly) there are no easy tuning\nanswers. Making queries less IO intensive is probably the best you can\ndo. From the tuning side, newer Linux kernels handle I/O fairness a\nlot better, and you could also try tweaking the I/O scheduler to\nachieve better throughput to avoid congestion or at least provide\nbetter latency for trivial queries. 
And of course its always possible\nto throw more hardware at the problem and upgrade the I/O subsystem.\n\nAnts Aasma\n--\nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de",
"msg_date": "Thu, 12 Jul 2012 00:35:59 +0800",
"msg_from": "Yan Chunlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "On 07/11/2012 07:40 PM, Yan Chunlu wrote:\n> could that because of my system is really busy?\n> 1, postgresql always have 400+ connections(dozens of python process \n> using client pool)\n> 2, the query peak is 50+/s\n> 3, I do have some bad performance sql executing periodically, need \n> 100+ second to complete. could those bad performance sql influence \n> others? because when I execute those simple sql directly, they was \n> fast. but the slow query log shows it took too much time.\n>\nOh, come on, these are the sorts of things you tell us /when you ask \nyour question/, not days later after lots of back-and-forth discussion.\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 07/11/2012 07:40 PM, Yan Chunlu\n wrote:\n\n\n\n could that because of my system is really busy? \n 1, postgresql always have 400+ connections(dozens of python\n process using client pool)\n2, the query peak is 50+/s\n3, I do have some bad performance sql executing periodically,\n need 100+ second to complete. could those bad performance sql\n influence others? because when I execute those simple sql\n directly, they was fast. but the slow query log shows it took\n too much time.\n\n\n\n Oh, come on, these are the sorts of things you tell us when you\n ask your question, not days later after lots of back-and-forth\n discussion.\n\n --\n Craig Ringer",
"msg_date": "Thu, 12 Jul 2012 08:18:23 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "Really sorry for the lack of information, but I did asked if the slow\nqueries could affect those simple one:\n 'so I wonder could this simple select is innocent and affected badly by\nother queries? '\n\nI didn't mention the connections number because I don't think my app is\nthat busy, and the large number connections was caused by slow queries.\n\nI was wrong, everything is connected, too many factor could end with the\nresult,I am really sorry, I will tell everything I knew the next time:)\nI learnt a lot during the back and forth!\n\n\n\nOn Thursday, July 12, 2012, Craig Ringer wrote:\n\n> On 07/11/2012 07:40 PM, Yan Chunlu wrote:\n>\n> could that because of my system is really busy?\n> 1, postgresql always have 400+ connections(dozens of python process using\n> client pool)\n> 2, the query peak is 50+/s\n> 3, I do have some bad performance sql executing periodically, need 100+\n> second to complete. could those bad performance sql influence others?\n> because when I execute those simple sql directly, they was fast. but the\n> slow query log shows it took too much time.\n>\n> Oh, come on, these are the sorts of things you tell us *when you ask\n> your question*, not days later after lots of back-and-forth discussion.\n>\n> --\n> Craig Ringer\n>\n\nReally sorry for the lack of information, but I did asked if the slow queries could affect those simple one: 'so I wonder could this simple select is innocent and affected badly by other queries? '\nI didn't mention the connections number because I don't think my app is that busy, and the large number connections was caused by slow queries. I was wrong, everything is connected, too many factor could end with the result,I am really sorry, I will tell everything I knew the next time:) \nI learnt a lot during the back and forth!On Thursday, July 12, 2012, Craig Ringer wrote:\n\nOn 07/11/2012 07:40 PM, Yan Chunlu\n wrote:\n\n\n \n could that because of my system is really busy? \n 1, postgresql always have 400+ connections(dozens of python\n process using client pool)\n2, the query peak is 50+/s\n3, I do have some bad performance sql executing periodically,\n need 100+ second to complete. could those bad performance sql\n influence others? because when I execute those simple sql\n directly, they was fast. but the slow query log shows it took\n too much time.\n\n\n\n Oh, come on, these are the sorts of things you tell us when you\n ask your question, not days later after lots of back-and-forth\n discussion.\n\n --\n Craig Ringer",
"msg_date": "Thu, 12 Jul 2012 08:47:21 +0800",
"msg_from": "Yan Chunlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "On Wed, Jul 11, 2012 at 5:47 PM, Yan Chunlu <[email protected]> wrote:\n> I learnt a lot during the back and forth!\n\nGreat to hear.\n\n>> 1, postgresql always have 400+ connections(dozens of python process using client pool)\n\nNote that Postgres does not deal well with a large number of\nconnections[1]: consider shrinking the size of the pool.\n\n[1]: http://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n",
"msg_date": "Wed, 11 Jul 2012 18:07:00 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "On 07/12/2012 08:47 AM, Yan Chunlu wrote:\n> Really sorry for the lack of information\nI shouldn't have grumped like that either, sorry about that.\n\n> I didn't mention the connections number because I don't think my app \n> is that busy, and the large number connections was caused by slow queries.\n\nYep - assumptions are a killer like that.\n\nNow you know to watch your system load with iostat, vmstat, top, etc and \nto monitor your overall load.\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 07/12/2012 08:47 AM, Yan Chunlu\n wrote:\n\n\n\n Really sorry for the lack of information\n I shouldn't have grumped like that either, sorry about that.\n\nI didn't mention the connections number because I\n don't think my app is that busy, and the large number connections\n was caused by slow queries.\n\n\n Yep - assumptions are a killer like that.\n\n Now you know to watch your system load with iostat, vmstat, top, etc\n and to monitor your overall load.\n\n --\n Craig Ringer",
"msg_date": "Thu, 12 Jul 2012 10:20:26 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "after check out the wiki page Maciek mentioned, turns out that heavy\nconnection also burden the disk hardly.\nlooks like I am in the vicious circle:\n1, slow query cause connection blocked so the client request more\nconnection.\n2, more connection cause high disk io and make even the simplest query slow\nand block.\n\n\nI guess I should optimized those queries first...\n\n\nOn Thu, Jul 12, 2012 at 10:20 AM, Craig Ringer <[email protected]>wrote:\n\n> On 07/12/2012 08:47 AM, Yan Chunlu wrote:\n>\n> Really sorry for the lack of information\n>\n> I shouldn't have grumped like that either, sorry about that.\n>\n>\n> I didn't mention the connections number because I don't think my app is\n> that busy, and the large number connections was caused by slow queries.\n>\n>\n> Yep - assumptions are a killer like that.\n>\n> Now you know to watch your system load with iostat, vmstat, top, etc and\n> to monitor your overall load.\n>\n> --\n> Craig Ringer\n>\n\nafter check out the wiki page Maciek mentioned, turns out that heavy connection also burden the disk hardly.looks like I am in the vicious circle:1, slow query cause connection blocked so the client request more connection. \n2, more connection cause high disk io and make even the simplest query slow and block.I guess I should optimized those queries first...\n\nOn Thu, Jul 12, 2012 at 10:20 AM, Craig Ringer <[email protected]> wrote:\n\nOn 07/12/2012 08:47 AM, Yan Chunlu\n wrote:\n\n\n \n Really sorry for the lack of information\n I shouldn't have grumped like that either, sorry about that.\n\nI didn't mention the connections number because I\n don't think my app is that busy, and the large number connections\n was caused by slow queries.\n\n\n Yep - assumptions are a killer like that.\n\n Now you know to watch your system load with iostat, vmstat, top, etc\n and to monitor your overall load.\n\n --\n Craig Ringer",
"msg_date": "Thu, 12 Jul 2012 13:10:06 +0800",
"msg_from": "Yan Chunlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "On 07/12/2012 01:10 PM, Yan Chunlu wrote:\n> after check out the wiki page Maciek mentioned, turns out that heavy \n> connection also burden the disk hardly.\n> looks like I am in the vicious circle:\n> 1, slow query cause connection blocked so the client request more \n> connection.\n> 2, more connection cause high disk io and make even the simplest query \n> slow and block.\n\nWhile true, you can often control this by making sure you don't \ncompletely overload your hardware, queuing queries instead of running \nthem all at once.\n\nYou may still discover that your hardware can't cope with the workload \nin that your queues may just keep on getting deeper or time out. In that \ncase, you certainly need to optimise your queries, tune your database, \nand/or get bigger hardware.\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 07/12/2012 01:10 PM, Yan Chunlu\n wrote:\n\n\n\nafter check out the wiki page Maciek mentioned, turns out\n that heavy connection also burden the disk hardly.\n looks like I am in the vicious circle:\n 1, slow query cause connection blocked so the client request\n more connection. \n2, more connection cause high disk io and make even the\n simplest query slow and block.\n\n\n\n While true, you can often control this by making sure you don't\n completely overload your hardware, queuing queries instead of\n running them all at once.\n\n You may still discover that your hardware can't cope with the\n workload in that your queues may just keep on getting deeper or time\n out. In that case, you certainly need to optimise your queries, tune\n your database, and/or get bigger hardware.\n\n --\n Craig Ringer",
"msg_date": "Thu, 12 Jul 2012 14:56:29 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "yes the system seems overloaded, I am dealing with a simple \"INSERT\" but\nnot sure if it is normal that it took more time than the explain estimated:\n\n\nexplain analyze INSERT INTO vote_content ( thing1_id, thing2_id, name,\ndate) VALUES (1,1, E'1', '2012-07-12T12:34:29.926863+00:00'::timestamptz)\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------\n Insert (cost=0.00..0.01 rows=1 width=0) (actual time=79.610..79.610\nrows=0 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.058..0.060\nrows=1 loops=1)\n Total runtime: 79.656 ms\n\nit is a table with *50 million* rows, so not sure if it is too large... I\nhave attached the schema below:\n\n Column | Type |\nModifiers\n-----------+--------------------------+------------------------------------------------------------------------------------\n rel_id | bigint | not null default\nnextval('vote_content_rel_id_seq'::regclass)\n thing1_id | bigint | not null\n thing2_id | bigint | not null\n name | character varying | not null\n date | timestamp with time zone | not null\nIndexes:\n \"vote_content_pkey\" PRIMARY KEY, btree (rel_id)\n \"vote_content_thing1_id_key\" UNIQUE, btree (thing1_id, thing2_id, name)\n \"idx_date_vote_content\" btree (date)\n \"idx_name_vote_content\" btree (name)\n \"idx_thing1_id_vote_content\" btree (thing1_id)\n \"idx_thing1_name_date_vote_content\" btree (thing1_id, name, date)\n \"idx_thing2_id_vote_content\" btree (thing2_id)\n \"idx_thing2_name_date_vote_content\" btree (thing2_id, name, date)\n\nbesides, it not the rush hour, so the disk IO is not the problem\ncurrently(I think):\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz\navgqu-sz await r_await w_await svctm %util\nsda 0.00 44.50 9.50 21.50 76.00 264.00 21.94\n 0.16 5.10 12.42 1.86 4.39 13.60\nsdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n 0.00 0.00 0.00 0.00 0.00 0.00\n\n\n\nOn Thu, Jul 12, 2012 at 2:56 PM, Craig Ringer <[email protected]> wrote:\n\n> On 07/12/2012 01:10 PM, Yan Chunlu wrote:\n>\n> after check out the wiki page Maciek mentioned, turns out that heavy\n> connection also burden the disk hardly.\n> looks like I am in the vicious circle:\n> 1, slow query cause connection blocked so the client request more\n> connection.\n> 2, more connection cause high disk io and make even the simplest query\n> slow and block.\n>\n>\n> While true, you can often control this by making sure you don't completely\n> overload your hardware, queuing queries instead of running them all at once.\n>\n> You may still discover that your hardware can't cope with the workload in\n> that your queues may just keep on getting deeper or time out. In that case,\n> you certainly need to optimise your queries, tune your database, and/or get\n> bigger hardware.\n>\n> --\n> Craig Ringer\n>\n\nyes the system seems overloaded, I am dealing with a simple \"INSERT\" but not sure if it is normal that it took more time than the explain estimated:explain analyze INSERT INTO vote_content ( thing1_id, thing2_id, name, date) VALUES (1,1, E'1', '2012-07-12T12:34:29.926863+00:00'::timestamptz)\n QUERY PLAN ------------------------------------------------------------------------------------------\n Insert (cost=0.00..0.01 rows=1 width=0) (actual time=79.610..79.610 rows=0 loops=1) -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.058..0.060 rows=1 loops=1) Total runtime: 79.656 ms\nit is a table with 50 million rows, so not sure if it is too large... 
I have attached the schema below: Column | Type | Modifiers \n-----------+--------------------------+------------------------------------------------------------------------------------ rel_id | bigint | not null default nextval('vote_content_rel_id_seq'::regclass)\n thing1_id | bigint | not null thing2_id | bigint | not null name | character varying | not null date | timestamp with time zone | not null\nIndexes: \"vote_content_pkey\" PRIMARY KEY, btree (rel_id) \"vote_content_thing1_id_key\" UNIQUE, btree (thing1_id, thing2_id, name) \"idx_date_vote_content\" btree (date)\n \"idx_name_vote_content\" btree (name) \"idx_thing1_id_vote_content\" btree (thing1_id) \"idx_thing1_name_date_vote_content\" btree (thing1_id, name, date)\n \"idx_thing2_id_vote_content\" btree (thing2_id) \"idx_thing2_name_date_vote_content\" btree (thing2_id, name, date)besides, it not the rush hour, so the disk IO is not the problem currently(I think):\nDevice: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %utilsda 0.00 44.50 9.50 21.50 76.00 264.00 21.94 0.16 5.10 12.42 1.86 4.39 13.60\nsdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00On Thu, Jul 12, 2012 at 2:56 PM, Craig Ringer <[email protected]> wrote:\n\n\nOn 07/12/2012 01:10 PM, Yan Chunlu\n wrote:\n\n\nafter check out the wiki page Maciek mentioned, turns out\n that heavy connection also burden the disk hardly.\n looks like I am in the vicious circle:\n 1, slow query cause connection blocked so the client request\n more connection. \n2, more connection cause high disk io and make even the\n simplest query slow and block.\n\n\n\n While true, you can often control this by making sure you don't\n completely overload your hardware, queuing queries instead of\n running them all at once.\n\n You may still discover that your hardware can't cope with the\n workload in that your queues may just keep on getting deeper or time\n out. In that case, you certainly need to optimise your queries, tune\n your database, and/or get bigger hardware.\n\n --\n Craig Ringer",
"msg_date": "Thu, 12 Jul 2012 20:48:01 +0800",
"msg_from": "Yan Chunlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "On 07/12/2012 08:48 PM, Yan Chunlu wrote:\n>\n>\n> explain analyze INSERT INTO vote_content ( thing1_id, thing2_id, name, \n> date) VALUES (1,1, E'1', '2012-07-12T12:34:29.926863+00:00'::timestamptz)\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------\n> Insert (cost=0.00..0.01 rows=1 width=0) (actual time=79.610..79.610 \n> rows=0 loops=1)\n> -> Result (cost=0.00..0.01 rows=1 width=0) (actual \n> time=0.058..0.060 rows=1 loops=1)\n> Total runtime: 79.656 ms\n>\n> it is a table with *50 million* rows, so not sure if it is too \n> large... I have attached the schema below:\n\nYou have eight indexes on that table according to the schema you showed. \nThree of them cover three columns. Those indexes are going to be \nexpensive to update; frankly I'm amazed it's that FAST to update them \nwhen they're that big.\n\nUse pg_size_pretty(pg_relation_size('index_name')) to get the index \nsizes and compare to the pg_relation_size of the table. It might be \ninformative.\n\nYou may see some insert performance benefits with a non-100% fill factor \non the indexes, but with possible performance costs to index scans.\n\n--\nCraig Ringer\n\n\n\n\n\n\n\nOn 07/12/2012 08:48 PM, Yan Chunlu\n wrote:\n\n\n\nexplain analyze INSERT INTO vote_content ( thing1_id,\n thing2_id, name, date) VALUES (1,1, E'1',\n '2012-07-12T12:34:29.926863+00:00'::timestamptz)\n\n\n\n QUERY PLAN \n \n------------------------------------------------------------------------------------------\n Insert (cost=0.00..0.01 rows=1 width=0) (actual\n time=79.610..79.610 rows=0 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0)\n (actual time=0.058..0.060 rows=1 loops=1)\n Total runtime: 79.656 ms\n\n\nit is a table with 50 million rows, so not sure\n if it is too large... I have attached the schema below:\n\n\n\n\n You have eight indexes on that table according to the schema you\n showed. Three of them cover three columns. Those indexes are going\n to be expensive to update; frankly I'm amazed it's that FAST to\n update them when they're that big.\n\n Use pg_size_pretty(pg_relation_size('index_name')) to get the index\n sizes and compare to the pg_relation_size of the table. It might be\n informative.\n\n You may see some insert performance benefits with a non-100% fill\n factor on the indexes, but with possible performance costs to index\n scans.\n\n --\n Craig Ringer",
"msg_date": "Thu, 12 Jul 2012 22:39:28 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how could select id=xx so slow?"
},
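A sketch of the size comparison suggested above, written against the vote_content table posted earlier in the thread (the table and index names come straight from that schema):

    -- Size of each index on the table, largest first.
    SELECT indexrelname,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
      FROM pg_stat_user_indexes
     WHERE relname = 'vote_content'
     ORDER BY pg_relation_size(indexrelid) DESC;

    -- Heap size of the table itself, for comparison.
    SELECT pg_size_pretty(pg_relation_size('vote_content')) AS table_size;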
{
"msg_contents": "On Thu, Jul 12, 2012 at 3:48 PM, Yan Chunlu <[email protected]> wrote:\n> yes the system seems overloaded, I am dealing with a simple \"INSERT\" but not\n> sure if it is normal that it took more time than the explain estimated:\n\nThe estimated cost is in arbitrary units, its purpose is to compare\ndifferent execution plans, not estimate time taken. So it's completely\nnormal that it doesn't match actual time taken.\n\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n",
"msg_date": "Thu, 12 Jul 2012 19:07:17 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "On Thu, Jul 12, 2012 at 9:07 AM, Ants Aasma <[email protected]> wrote:\n> On Thu, Jul 12, 2012 at 3:48 PM, Yan Chunlu <[email protected]> wrote:\n>> yes the system seems overloaded, I am dealing with a simple \"INSERT\" but not\n>> sure if it is normal that it took more time than the explain estimated:\n>\n> The estimated cost is in arbitrary units, its purpose is to compare\n> different execution plans, not estimate time taken. So it's completely\n> normal that it doesn't match actual time taken.\n\nRight. And to make explicit what you implied, when there is only one\nto do something (like insert a row, or do maintenance on an index) it\noften doesn't even attempt to cost that at all as there is no choice.\nSo it is not just a matter of units.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 12 Jul 2012 11:53:35 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how could select id=xx so slow?"
},
{
"msg_contents": "got it, thanks! without your help I really have no idea what should be fast\nand what supposed to be slower.\n\nI also find \"select\" involves a lot of writes:\n\niotop shows:\n\n 2789 be/4 postgres 0.00 B 57.34 M 0.00 % 0.00 % postgres: goov\nconta 192.168.1.129(27300) SELECT\n\nI knew that select could cause writes, but not at this magnitude....\n\n\n\n\nOn Fri, Jul 13, 2012 at 2:53 AM, Jeff Janes <[email protected]> wrote:\n\n> On Thu, Jul 12, 2012 at 9:07 AM, Ants Aasma <[email protected]> wrote:\n> > On Thu, Jul 12, 2012 at 3:48 PM, Yan Chunlu <[email protected]>\n> wrote:\n> >> yes the system seems overloaded, I am dealing with a simple \"INSERT\"\n> but not\n> >> sure if it is normal that it took more time than the explain estimated:\n> >\n> > The estimated cost is in arbitrary units, its purpose is to compare\n> > different execution plans, not estimate time taken. So it's completely\n> > normal that it doesn't match actual time taken.\n>\n> Right. And to make explicit what you implied, when there is only one\n> to do something (like insert a row, or do maintenance on an index) it\n> often doesn't even attempt to cost that at all as there is no choice.\n> So it is not just a matter of units.\n>\n> Cheers,\n>\n> Jeff\n>\n\ngot it, thanks! without your help I really have no idea what should be fast and what supposed to be slower.I also find \"select\" involves a lot of writes:iotop shows:\n 2789 be/4 postgres 0.00 B 57.34 M 0.00 % 0.00 % postgres: goov conta 192.168.1.129(27300) SELECTI knew that select could cause writes, but not at this magnitude....\nOn Fri, Jul 13, 2012 at 2:53 AM, Jeff Janes <[email protected]> wrote:\nOn Thu, Jul 12, 2012 at 9:07 AM, Ants Aasma <[email protected]> wrote:\n\n\n> On Thu, Jul 12, 2012 at 3:48 PM, Yan Chunlu <[email protected]> wrote:\n>> yes the system seems overloaded, I am dealing with a simple \"INSERT\" but not\n>> sure if it is normal that it took more time than the explain estimated:\n>\n> The estimated cost is in arbitrary units, its purpose is to compare\n> different execution plans, not estimate time taken. So it's completely\n> normal that it doesn't match actual time taken.\n\nRight. And to make explicit what you implied, when there is only one\nto do something (like insert a row, or do maintenance on an index) it\noften doesn't even attempt to cost that at all as there is no choice.\nSo it is not just a matter of units.\n\nCheers,\n\nJeff",
"msg_date": "Fri, 13 Jul 2012 12:02:02 +0800",
"msg_from": "Yan Chunlu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how could select id=xx so slow?"
}
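One way to see where a read-only statement produces writes is EXPLAIN with the BUFFERS option (available since 9.0), which reports shared and temp blocks read and written per plan node. Writes coming out of a SELECT are commonly hint-bit updates on recently written rows or sorts spilling to temporary files; treat this as a hedged sketch rather than a diagnosis, reusing the table from this thread:

    -- Per-node buffer traffic for a read query.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM vote_content WHERE thing1_id = 1;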
] |
[
{
"msg_contents": "Hi to all, \n\n\nI run Postgresql 8.3.9 on a dedicated server running with Debian 5.0.4, a strong bi quad-proc with RAM 16Go. My biggest db contains at least 100 000 tables. Last time, I started a Java process I use to make some change on it, it created 170 new tables and it took one full minute. That is a very long time for such a process on such a server ! \nDo you think there could be some configuration tuning to do to improve the performance for create tables ? \nOr do I have to use tablespaces because 100000 files in a single folder is a too many for OS ? \nIt's possible to migrate the DB in 9.1 version. Do you think it could solve the trouble ? \n\n\nThank you all for your advices, \n\n\nBest regards \n\n\nSylvain \nHi to all,I run Postgresql 8.3.9 on a dedicated server running with Debian 5.0.4, a strong bi quad-proc with RAM 16Go. My biggest db contains at least 100 000 tables. Last time, I started a Java process I use to make some change on it, it created 170 new tables and it took one full minute. That is a very long time for such a process on such a server ! Do you think there could be some configuration tuning to do to improve the performance for create tables ? Or do I have to use tablespaces because 100000 files in a single folder is a too many for OS ? It's possible to migrate the DB in 9.1 version. Do you think it could solve the trouble ?Thank you all for your advices,Best regardsSylvain",
"msg_date": "Fri, 06 Jul 2012 17:15:49 +0200 (CEST)",
"msg_from": "Sylvain CAILLET <[email protected]>",
"msg_from_op": true,
"msg_subject": "Create tables performance"
},
{
"msg_contents": "On Fri, Jul 6, 2012 at 8:15 AM, Sylvain CAILLET <[email protected]> wrote:\n> Hi to all,\n>\n> I run Postgresql 8.3.9 on a dedicated server running with Debian 5.0.4, a\n> strong bi quad-proc with RAM 16Go. My biggest db contains at least 100 000\n> tables. Last time, I started a Java process I use to make some change on it,\n> it created 170 new tables and it took one full minute. That is a very long\n> time for such a process on such a server !\n\nWhat if you create those 170 tables in a database without 100,000\npre-existing tables?\n\nWhat else does your script do?\n\nI can create 170 tables each with 10 rows in a database containing\n100,000 other tables in less than a second on 8.3.9, either all in one\ntransaction or in ~340 separate transactions.\n\nSo whatever problem you are having is probably specific to your\ndetails, not a generic issue. It is hard to say if an upgrade would\nhelp if the root cause is not known.\n\nWhat do the standard monitoring tools show? Are you IO bound, or CPU\nbound? If CPU, is it in postgres or in java?\n\n> Do you think there could be some configuration tuning to do to improve the\n> performance for create tables ?\n> Or do I have to use tablespaces because 100000 files in a single folder is a\n> too many for OS ?\n\nI doubt that that is a problem on any reasonably modern Linux.\n\nCheers,\n\nJeff\n",
"msg_date": "Fri, 6 Jul 2012 10:22:16 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create tables performance"
},
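For anyone wanting to repeat that kind of timing test, a rough sketch follows; the DO block needs PostgreSQL 9.0 or later (on 8.3 the same loop can live in a throwaway PL/pgSQL function), psql's \timing shows the elapsed time, and the bench_tbl_ prefix is just a placeholder.

    DO $$
    BEGIN
        -- Create 170 small tables, mimicking the test described above.
        FOR i IN 1..170 LOOP
            EXECUTE 'CREATE TABLE bench_tbl_' || i || ' (id bigint, val text)';
        END LOOP;
    END
    $$;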
{
"msg_contents": "On 06/07/12 16:15, Sylvain CAILLET wrote:\n> Hi to all,\n>\n> I run Postgresql 8.3.9 on a dedicated server running with Debian 5.0.4,\n> a strong bi quad-proc with RAM 16Go. My biggest db contains at least 100\n> 000 tables.\n\nThat is a *lot* of tables and it's probably going to be slow whatever \nyou do.\n\n> Last time, I started a Java process I use to make some\n> change on it, it created 170 new tables and it took one full minute.\n\nWhat are you using all these tables for? I'm assuming most of them have \nidentical structure.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 06 Jul 2012 19:12:24 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create tables performance"
},
{
"msg_contents": "On 07/06/2012 11:15 PM, Sylvain CAILLET wrote:\n> Hi to all,\n>\n> I run Postgresql 8.3.9 on a dedicated server running with Debian \n> 5.0.4, a strong bi quad-proc with RAM 16Go. My biggest db contains at \n> least 100 000 tables. Last time, I started a Java process I use to \n> make some change on it, it created 170 new tables and it took one full \n> minute. That is a very long time for such a process on such a server !\nIf you create and drop a lot of tables, you need to make sure you're \nvacuuming the pg_catalog tables frequently. Newer versions mostly take \ncare of this for you, but on 8.3 you'll at minimum have to turn \nautovaccum right up.\n\nSee what happens if you run in psql, as a Pg superuser (usually the \n\"postgres\" account):\n\n CLUSTER pg_class_oid_index ON pg_catalog.pg_class;\n CLUSTER pg_type_oid_index ON pg_catalog.pg_type;\n CLUSTER pg_attribute_relid_attnam_index ON pg_catalog.pg_attribute;\n CLUSTER pg_index_indexrelid_index ON pg_catalog.pg_index;\n\nI'm guessing you have severe table bloat in your catalogs, in which case \nthis may help. I use CLUSTER instead of VACCUUM FULL because on old \nversions like 8.3 it'll run faster and sort the indexes for you too.\n\n> Do you think there could be some configuration tuning to do to improve \n> the performance for create tables ?\n> Or do I have to use tablespaces because 100000 files in a single \n> folder is a too many for OS ?\n\nThat won't be a problem unless your OS and file system are truly crap.\n\n--\nCraig Ringer\n\n\n\n\n\n\n\nOn 07/06/2012 11:15 PM, Sylvain CAILLET\n wrote:\n\n\n\n\nHi to all,\n\n\nI run Postgresql 8.3.9 on a dedicated server running with\n Debian 5.0.4, a strong bi quad-proc with RAM 16Go. My biggest\n db contains at least 100 000 tables. Last time, I started a\n Java process I use to make some change on it, it created 170\n new tables and it took one full minute. That is a very long\n time for such a process on such a server ! \n\n\n\n If you create and drop a lot of tables, you need to make sure you're\n vacuuming the pg_catalog tables frequently. Newer versions mostly\n take care of this for you, but on 8.3 you'll at minimum have to turn\n autovaccum right up.\n\n See what happens if you run in psql, as a Pg superuser (usually the\n \"postgres\" account):\n\n CLUSTER pg_class_oid_index ON pg_catalog.pg_class;\n CLUSTER pg_type_oid_index ON pg_catalog.pg_type;\n CLUSTER pg_attribute_relid_attnam_index ON\n pg_catalog.pg_attribute;\n CLUSTER pg_index_indexrelid_index ON pg_catalog.pg_index;\n\n I'm guessing you have severe table bloat in your catalogs, in which\n case this may help. I use CLUSTER instead of VACCUUM FULL because on\n old versions like 8.3 it'll run faster and sort the indexes for you\n too. \n\n\n\nDo you think there could be some configuration tuning to do\n to improve the performance for create tables ? \nOr do I have to use tablespaces because 100000 files in a\n single folder is a too many for OS ? \n\n\n\n\n That won't be a problem unless your OS and file system are truly\n crap.\n\n --\n Craig Ringer",
"msg_date": "Sat, 07 Jul 2012 10:27:39 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create tables performance"
},
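A sketch for checking whether the catalogs really are bloated before doing anything invasive (these size functions exist on 8.3 as well):

    -- The ten largest system catalogs; pg_class, pg_attribute and pg_depend
    -- grow quickly when tables are created and dropped all the time.
    SELECT c.relname,
           pg_size_pretty(pg_relation_size(c.oid)) AS on_disk_size
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE n.nspname = 'pg_catalog'
       AND c.relkind = 'r'
     ORDER BY pg_relation_size(c.oid) DESC
     LIMIT 10;

As the follow-up below notes, CLUSTER is refused on system catalogs, so plain VACUUM (or VACUUM FULL in a maintenance window) is the practical option there.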
{
"msg_contents": "Hi, \n\n\nThank you all for your help. \n\n\n@Jeff : my daemon creates these tables at start time so it doesn't do anything else at the same time. The CPU is loaded between 20% and 25%. \n@Richard : Sure the DB number of table is quite big and sure most of them have the same structure, but it's very hard to move it now so I have to deal with it for a while ! \n@Craig : I can't run any of the queries. Fo example, \" CLUSTER pg_class_oid_index ON pg_catalog.pg_class; \" throws a \" ERROR: \"pg_class\" is a system catalog \" exception. But, using VACUUM FULL, it's done in less than a second. Autovacuum is on but not tuned in postgresql configuration file. \n\n\nSylvain Caillet \n----- Mail original -----\n\n\n\nOn 07/06/2012 11:15 PM, Sylvain CAILLET wrote: \n\n<blockquote>\n\n\nHi to all, \n\n\nI run Postgresql 8.3.9 on a dedicated server running with Debian 5.0.4, a strong bi quad-proc with RAM 16Go. My biggest db contains at least 100 000 tables. Last time, I started a Java process I use to make some change on it, it created 170 new tables and it took one full minute. That is a very long time for such a process on such a server ! \n\n\nIf you create and drop a lot of tables, you need to make sure you're vacuuming the pg_catalog tables frequently. Newer versions mostly take care of this for you, but on 8.3 you'll at minimum have to turn autovaccum right up. \n\nSee what happens if you run in psql, as a Pg superuser (usually the \"postgres\" account): \n\nCLUSTER pg_class_oid_index ON pg_catalog.pg_class; \nCLUSTER pg_type_oid_index ON pg_catalog.pg_type; \nCLUSTER pg_attribute_relid_attnam_index ON pg_catalog.pg_attribute; \nCLUSTER pg_index_indexrelid_index ON pg_catalog.pg_index; \n\nI'm guessing you have severe table bloat in your catalogs, in which case this may help. I use CLUSTER instead of VACCUUM FULL because on old versions like 8.3 it'll run faster and sort the indexes for you too. \n\n\n<blockquote>\n\n\nDo you think there could be some configuration tuning to do to improve the performance for create tables ? \nOr do I have to use tablespaces because 100000 files in a single folder is a too many for OS ? \n\n</blockquote>\n\nThat won't be a problem unless your OS and file system are truly crap. \n\n-- \nCraig Ringer \n\n\n</blockquote>\n\n\nHi,Thank you all for your help. @Jeff : my daemon creates these tables at start time so it doesn't do anything else at the same time. The CPU is loaded between 20% and 25%.@Richard : Sure the DB number of table is quite big and sure most of them have the same structure, but it's very hard to move it now so I have to deal with it for a while ! @Craig : I can't run any of the queries. Fo example, \"CLUSTER pg_class_oid_index ON pg_catalog.pg_class;\" throws a \"ERROR: \"pg_class\" is a system catalog\" exception. But, using VACUUM FULL, it's done in less than a second. Autovacuum is on but not tuned in postgresql configuration file.Sylvain Caillet\nOn 07/06/2012 11:15 PM, Sylvain CAILLET\n wrote:\n\n\n\nHi to all,\n\n\nI run Postgresql 8.3.9 on a dedicated server running with\n Debian 5.0.4, a strong bi quad-proc with RAM 16Go. My biggest\n db contains at least 100 000 tables. Last time, I started a\n Java process I use to make some change on it, it created 170\n new tables and it took one full minute. That is a very long\n time for such a process on such a server ! \n\n\n\n If you create and drop a lot of tables, you need to make sure you're\n vacuuming the pg_catalog tables frequently. 
Newer versions mostly\n take care of this for you, but on 8.3 you'll at minimum have to turn\n autovaccum right up.\n\n See what happens if you run in psql, as a Pg superuser (usually the\n \"postgres\" account):\n\n CLUSTER pg_class_oid_index ON pg_catalog.pg_class;\n CLUSTER pg_type_oid_index ON pg_catalog.pg_type;\n CLUSTER pg_attribute_relid_attnam_index ON\n pg_catalog.pg_attribute;\n CLUSTER pg_index_indexrelid_index ON pg_catalog.pg_index;\n\n I'm guessing you have severe table bloat in your catalogs, in which\n case this may help. I use CLUSTER instead of VACCUUM FULL because on\n old versions like 8.3 it'll run faster and sort the indexes for you\n too. \n\n\n\nDo you think there could be some configuration tuning to do\n to improve the performance for create tables ? \nOr do I have to use tablespaces because 100000 files in a\n single folder is a too many for OS ? \n\n\n\n\n That won't be a problem unless your OS and file system are truly\n crap.\n\n --\n Craig Ringer",
"msg_date": "Mon, 09 Jul 2012 08:49:36 +0200 (CEST)",
"msg_from": "Sylvain CAILLET <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Create tables performance"
},
{
"msg_contents": "On Sun, Jul 8, 2012 at 11:49 PM, Sylvain CAILLET <[email protected]> wrote:\n> Hi,\n>\n> Thank you all for your help.\n>\n> @Jeff : my daemon creates these tables at start time so it doesn't do\n> anything else at the same time. The CPU is loaded between 20% and 25%.\n\nHow does it decide which tables to create? Is it querying the\nexisting tables to figure out what new ones to make? Is the rest of\nthe time going to IO wait?\n\nCheers,\n\nJeff\n",
"msg_date": "Mon, 9 Jul 2012 10:02:50 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create tables performance"
},
{
"msg_contents": "Yes, you're right ! The process checks if all these tables exist before creating them. So it might be the SELECT that takes time. To check existence, I use the following query : \nselect * from pg_tables where tablename='the_table'; \nMay be it's not the best way. And I launch a query per table ! Not good at all. \n\nThank you all, I will optimize this. \n\nSylvain \n\n----- Mail original -----\n\n> On Sun, Jul 8, 2012 at 11:49 PM, Sylvain CAILLET\n> <[email protected]> wrote:\n> > Hi,\n> >\n> > Thank you all for your help.\n> >\n> > @Jeff : my daemon creates these tables at start time so it doesn't\n> > do\n> > anything else at the same time. The CPU is loaded between 20% and\n> > 25%.\n\n> How does it decide which tables to create? Is it querying the\n> existing tables to figure out what new ones to make? Is the rest of\n> the time going to IO wait?\n\n> Cheers,\n\n> Jeff\n\nYes, you're right ! The process checks if all these tables exist before creating them. So it might be the SELECT that takes time. To check existence, I use the following query :select * from pg_tables where tablename='the_table';May be it's not the best way. And I launch a query per table ! Not good at all.Thank you all, I will optimize this.SylvainOn Sun, Jul 8, 2012 at 11:49 PM, Sylvain CAILLET <[email protected]> wrote:> Hi,>> Thank you all for your help.>> @Jeff : my daemon creates these tables at start time so it doesn't do> anything else at the same time. The CPU is loaded between 20% and 25%.How does it decide which tables to create? Is it querying theexisting tables to figure out what new ones to make? Is the rest ofthe time going to IO wait?Cheers,Jeff",
"msg_date": "Tue, 10 Jul 2012 08:27:40 +0200 (CEST)",
"msg_from": "Sylvain CAILLET <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Create tables performance"
}
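The per-table check can be collapsed into one round trip; a sketch, where the names in the VALUES list stand in for whatever tables the daemon wants to ensure exist (VALUES lists work on 8.2 and later, and CREATE TABLE IF NOT EXISTS only arrives in 9.1):

    -- Returns only the names that are still missing, so the daemon
    -- can issue CREATE TABLE just for those.
    SELECT t.name
      FROM (VALUES ('table_a'), ('table_b'), ('table_c')) AS t(name)
     WHERE NOT EXISTS (SELECT 1
                         FROM pg_tables
                        WHERE schemaname = 'public'
                          AND tablename = t.name);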
] |
[
{
"msg_contents": "I have a query which joins to a nested union and I'm getting a plan which never returns. Here is the query simplified as much as possible:\n\nselect 'anything' as result\n from \"Attribute\" as A1\n inner join\n (\n select R.\"TargetID\" as \"SourceID\"\n from \"Relationship\" as R\n union\n select A2.\"PersonID\" as \"SourceID\"\n from \"Attribute\" as A2\n ) as X on (A1.\"PersonID\" = X.\"SourceID\")\n where (A1.\"ID\" = 124791200)\n\n(this seems like a strange query, but it is simplified to eliminate everything I could)\n\nHere is the execution plan I am seeing:\nhttp://explain.depesz.com/s/BwUd\n\nMerge Join (cost=229235406.73..244862067.56 rows=727 width=0)\n Output: 'anything'\n Merge Cond: (r.\"TargetID\" = a1.\"PersonID\")\n -> Unique (cost=229235336.51..233700093.63 rows=892951424 width=8)\n Output: r.\"TargetID\"\n -> Sort (cost=229235336.51..231467715.07 rows=892951424 width=8)\n Output: r.\"TargetID\"\n Sort Key: r.\"TargetID\"\n -> Append (cost=0.00..23230287.48 rows=892951424 width=8)\n -> Seq Scan on public.\"Relationship\" r (cost=0.00..5055084.88 rows=328137088 width=8)\n Output: r.\"TargetID\"\n -> Seq Scan on public.\"Attribute\" a2 (cost=0.00..9245688.36 rows=564814336 width=8)\n Output: a2.\"PersonID\"\n -> Materialize (cost=70.22..70.23 rows=1 width=8)\n Output: a1.\"PersonID\"\n -> Sort (cost=70.22..70.23 rows=1 width=8)\n Output: a1.\"PersonID\"\n Sort Key: a1.\"PersonID\"\n -> Index Scan using \"UIDX_Attribute_ID\" on public.\"Attribute\" a1 (cost=0.00..70.21 rows=1 width=8)\n Output: a1.\"PersonID\"\n Index Cond: (a1.\"ID\" = 124791200)\n\nAs you can see, the Relationship table has ~300 million rows and Attribute has ~500 million rows. I could not include the explain analyze because the query never completes. Going to \"union all\" fixes it, nesting the restriction fixes it, making the restriction limit X rather than A1 fixes it. Unfortunately, none of these \"fixes\" are acceptable within the context of the complete query this was simplified from.\n\nVersion string: PostgreSQL 9.1.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52), 64-bit\nOS: CentOS 5\nRAM: 128GB\nProcessor: AMD Opteron(tm) 6174, 24 cores\n\nI've not changed any configuration settings from the based EnterpriseDB installer besides shared_buffers. Presently the DB is static, and I have executed analyze to update the stats since loading it.\n\nRelevant schema:\n\nCREATE TABLE \"Attribute\"\n(\n \"ID\" bigint NOT NULL,\n \"PersonID\" bigint NOT NULL,\n \"Type\" character varying(5) NOT NULL\n)\nWITH ( OIDS=FALSE);\n\nCREATE INDEX \"IDX_Attribute_PersonID_Type\" ON \"Attribute\" USING btree\n (\"PersonID\" , \"Type\" COLLATE pg_catalog.\"default\" );\n\nCREATE UNIQUE INDEX \"UIDX_Attribute_ID\"\n ON \"Attribute\" USING btree (\"ID\" );\n\nCREATE TABLE \"Relationship\"\n(\n \"ID\" bigint NOT NULL,\n \"TargetID\" bigint NOT NULL\n) WITH ( OIDS=FALSE);\n\nCREATE INDEX \"IDX_Relationship_TargetID\"\n ON \"Relationship\" USING btree (\"TargetID\" );\n\nCREATE UNIQUE INDEX \"UIDX_Relationship_ID\"\n ON \"Relationship\" USING btree (\"ID\" );\n\nThanks,\n\n-Nate\n\n\n\n\n\n\n\n\n\n\nI have a query which joins to a nested union and I’m getting a plan which never returns. 
Here is the query simplified as much as possible:\n \nselect 'anything' as result\n from \"Attribute\" as A1\n inner join \n (\n select R.\"TargetID\" as \"SourceID\"\n from \"Relationship\" as R\n union\n select A2.\"PersonID\" as \"SourceID\"\n from \"Attribute\" as A2\n ) as X on (A1.\"PersonID\" = X.\"SourceID\")\n where (A1.\"ID\" = 124791200)\n \n(this seems like a strange query, but it is simplified to eliminate everything I could)\n \nHere is the execution plan I am seeing:\nhttp://explain.depesz.com/s/BwUd\n \nMerge Join (cost=229235406.73..244862067.56 rows=727 width=0)\n Output: 'anything'\n Merge Cond: (r.\"TargetID\" = a1.\"PersonID\")\n -> Unique (cost=229235336.51..233700093.63 rows=892951424 width=8)\n Output: r.\"TargetID\"\n -> Sort (cost=229235336.51..231467715.07 rows=892951424 width=8)\n Output: r.\"TargetID\"\n Sort Key: r.\"TargetID\"\n -> Append (cost=0.00..23230287.48 rows=892951424 width=8)\n -> Seq Scan on public.\"Relationship\" r (cost=0.00..5055084.88 rows=328137088 width=8)\n Output: r.\"TargetID\"\n -> Seq Scan on public.\"Attribute\" a2 (cost=0.00..9245688.36 rows=564814336 width=8)\n Output: a2.\"PersonID\"\n -> Materialize (cost=70.22..70.23 rows=1 width=8)\n Output: a1.\"PersonID\"\n -> Sort (cost=70.22..70.23 rows=1 width=8)\n Output: a1.\"PersonID\"\n Sort Key: a1.\"PersonID\"\n -> Index Scan using \"UIDX_Attribute_ID\" on public.\"Attribute\" a1 (cost=0.00..70.21 rows=1 width=8)\n Output: a1.\"PersonID\"\n Index Cond: (a1.\"ID\" = 124791200)\n \nAs you can see, the Relationship table has ~300 million rows and Attribute has ~500 million rows. I could not include the explain analyze because the query never completes. Going to “union all” fixes it, nesting the restriction fixes\n it, making the restriction limit X rather than A1 fixes it. Unfortunately, none of these “fixes” are acceptable within the context of the complete query this was simplified from.\n \nVersion string: PostgreSQL 9.1.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52), 64-bit\nOS: CentOS 5\nRAM: 128GB\nProcessor: AMD Opteron(tm) 6174, 24 cores\n \nI’ve not changed any configuration settings from the based EnterpriseDB installer besides shared_buffers. Presently the DB is static, and I have executed analyze to update the stats since loading it. \n\n \nRelevant schema:\n \nCREATE TABLE \"Attribute\"\n(\n \"ID\" bigint NOT NULL,\n \"PersonID\" bigint NOT NULL,\n \"Type\" character varying(5) NOT NULL\n)\nWITH ( OIDS=FALSE);\n \nCREATE INDEX \"IDX_Attribute_PersonID_Type\" ON \"Attribute\" USING btree\n (\"PersonID\" , \"Type\" COLLATE pg_catalog.\"default\" );\n \nCREATE UNIQUE INDEX \"UIDX_Attribute_ID\"\n ON \"Attribute\" USING btree (\"ID\" );\n \nCREATE TABLE \"Relationship\"\n(\n \"ID\" bigint NOT NULL,\n \"TargetID\" bigint NOT NULL\n) WITH ( OIDS=FALSE);\n \nCREATE INDEX \"IDX_Relationship_TargetID\"\n ON \"Relationship\" USING btree (\"TargetID\" );\n \nCREATE UNIQUE INDEX \"UIDX_Relationship_ID\"\n ON \"Relationship\" USING btree (\"ID\" );\n \nThanks,\n \n-Nate",
"msg_date": "Sat, 7 Jul 2012 22:35:06 +0000",
"msg_from": "Nate Allan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Terrible plan for join to nested union"
},
{
"msg_contents": "Nate Allan <[email protected]> writes:\n> I have a query which joins to a nested union and I'm getting a plan which never returns. Here is the query simplified as much as possible:\n> select 'anything' as result\n> from \"Attribute\" as A1\n> inner join\n> (\n> select R.\"TargetID\" as \"SourceID\"\n> from \"Relationship\" as R\n> union\n> select A2.\"PersonID\" as \"SourceID\"\n> from \"Attribute\" as A2\n> ) as X on (A1.\"PersonID\" = X.\"SourceID\")\n> where (A1.\"ID\" = 124791200)\n\nWhat exactly are you trying to accomplish here? AFAICS, the UNION\nresult must include every possible value of Attribute.PersonID, which\nmeans the inner join cannot eliminate any rows of A1 (except those with\nnull PersonID), which seems a tad silly.\n\nAnyway, I wonder whether you'd get better results with an EXISTS over\na correlated UNION ALL subquery, ie, something like\n\nselect 'anything' as result\n from \"Attribute\" as A1\n where (A1.\"ID\" = 124791200)\n and exists (\n select 1 from \"Relationship\" as R\n where R.\"TargetID\" = A1.\"PersonID\"\n union all\n select 1 from \"Attribute\" as A2\n where A2.\"PersonID\" = A1.\"PersonID\"\n )\n\nsince you're evidently hoping that the EXISTS won't need to be evaluated\nfor very many rows of A1. Or you could use an OR of two EXISTS to skip\nthe UNION altogether.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 07 Jul 2012 20:08:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Terrible plan for join to nested union"
},
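The last alternative mentioned there, an OR of two EXISTS that skips the UNION entirely, would look roughly like this for the simplified query:

    select 'anything' as result
      from "Attribute" as A1
     where A1."ID" = 124791200
       and (exists (select 1 from "Relationship" as R
                     where R."TargetID" = A1."PersonID")
            or exists (select 1 from "Attribute" as A2
                        where A2."PersonID" = A1."PersonID"));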
{
"msg_contents": "Thanks for your reply Tom.\n\n>> I have a query which joins to a nested union and I'm getting a plan which never returns. Here is the query simplified as much as possible:\n>> select 'anything' as result\n>> from \"Attribute\" as A1\n>> inner join\n>> (\n>> select R.\"TargetID\" as \"SourceID\"\n>> from \"Relationship\" as R\n>> union\n>> select A2.\"PersonID\" as \"SourceID\"\n>> from \"Attribute\" as A2\n>> ) as X on (A1.\"PersonID\" = X.\"SourceID\")\n>> where (A1.\"ID\" = 124791200)\n>\n> AFAICS, the UNION result must include every possible value of Attribute.PersonID, which means the inner join cannot \n>eliminate any rows of A1 (except those with null PersonID), which seems a tad silly.\n\nIt seems to me that the join condition (and hence the restriction) should be pushed down into both sides of the union to bring the cardinality limit from millions to 1. I'm imagining a rewrite like this: \n\tR(a) J (b U c) -> (b J R(a)) U (c J R(a))\n...where R = Restrict, J = Join, U = Union\n\nThis is the kind of rewrite I would make as a sentient being and it's one that at least one other DBMS I know of makes.\n\nAs an aside, even though not as good as pushing down the restriction, the plan that the \"union all\" produces is decent performance-wise:\nhttp://explain.depesz.com/s/OZq\nIt seems to me that a similar alternative could be applied for a distinct union by using two Index Scans followed by a Merge Join.\n\n>What exactly are you trying to accomplish here?\n\nI state in my post that there are several ways to rewrite the query to work-around the issue; I'm not really asking for a work-around but a) wondering why the plan is so bad; and b) asking if it could be fixed if possible. Unfortunately rewriting the query isn't a trivial matter in our case because the X (union) part of the query is represented logically as a view, which is expected to be restricted and/or joined so as not to actually materialize the actual union. Unfortunately the PostgreSQL planner seems to want to actually materialize that view. Working around this would basically entail not using the view, which is used all over the place, and instead duplicating the view's logic except pushing the restrictions and/or joins down into both sides of the union in each case. I could do that, but doing so would be: a) against the spirit of the Relational Model; b) against the spirit of \"fix the planner rather than add optimizer hints\"; c) a royal pain because it causes a rewrite of application logic; d) a point for at least one other DBMS's optimizer. :-)\n\n>Anyway, I wonder whether you'd get better results with an EXISTS over a correlated UNION ALL subquery, ie, something like\n> ...\n\nThanks for the work-arounds, but again, that's not quite what I'm after.\n\nBest,\n\n-Nate\n\n\n",
"msg_date": "Sun, 8 Jul 2012 05:50:01 +0000",
"msg_from": "Nate Allan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Terrible plan for join to nested union"
},
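Spelled out against the simplified query, the hand-applied form of that rewrite (restriction and join pushed into both branches of the union) would be something like:

    select 'anything' as result
      from "Relationship" as R
      join "Attribute" as A1 on A1."PersonID" = R."TargetID"
     where A1."ID" = 124791200
    union
    select 'anything' as result
      from "Attribute" as A2
      join "Attribute" as A1 on A1."PersonID" = A2."PersonID"
     where A1."ID" = 124791200;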
{
"msg_contents": "2012/7/8 Nate Allan <[email protected]>:\n> Thanks for your reply Tom.\n>\n>>> I have a query which joins to a nested union and I'm getting a plan which never returns. Here is the query simplified as much as possible:\n>>> select 'anything' as result\n>>> from \"Attribute\" as A1\n>>> inner join\n>>> (\n>>> select R.\"TargetID\" as \"SourceID\"\n>>> from \"Relationship\" as R\n>>> union\n>>> select A2.\"PersonID\" as \"SourceID\"\n>>> from \"Attribute\" as A2\n>>> ) as X on (A1.\"PersonID\" = X.\"SourceID\")\n>>> where (A1.\"ID\" = 124791200)\n>>\n>> AFAICS, the UNION result must include every possible value of Attribute.PersonID, which means the inner join cannot\n>>eliminate any rows of A1 (except those with null PersonID), which seems a tad silly.\n>\n> It seems to me that the join condition (and hence the restriction) should be pushed down into both sides of the union to bring the cardinality limit from millions to 1. I'm imagining a rewrite like this:\n> R(a) J (b U c) -> (b J R(a)) U (c J R(a))\n> ...where R = Restrict, J = Join, U = Union\n>\n> This is the kind of rewrite I would make as a sentient being and it's one that at least one other DBMS I know of makes.\n>\n> As an aside, even though not as good as pushing down the restriction, the plan that the \"union all\" produces is decent performance-wise:\n> http://explain.depesz.com/s/OZq\n> It seems to me that a similar alternative could be applied for a distinct union by using two Index Scans followed by a Merge Join.\n>\n>>What exactly are you trying to accomplish here?\n>\n> I state in my post that there are several ways to rewrite the query to work-around the issue; I'm not really asking for a work-around but a) wondering why the plan is so bad; and b) asking if it could be fixed if possible. Unfortunately rewriting the query isn't a trivial matter in our case because the X (union) part of the query is represented logically as a view, which is expected to be restricted and/or joined so as not to actually materialize the actual union. Unfortunately the PostgreSQL planner seems to want to actually materialize that view. Working around this would basically entail not using the view, which is used all over the place, and instead duplicating the view's logic except pushing the restrictions and/or joins down into both sides of the union in each case. I could do that, but doing so would be: a) against the spirit of the Relational Model; b) against the spirit of \"fix the planner rather than add optimizer hints\"; c) a royal pain because it causes a rewrite of application logic; d) a point for at least one other DBMS's optimizer. :-)\n\nyou are using EAV schema - it is against to relation model enough :)\n\nthis schema has the most terrible performance for large datasets -\nlooks on hstore instead\n\nRegards\n\nPavel\n\n>\n>>Anyway, I wonder whether you'd get better results with an EXISTS over a correlated UNION ALL subquery, ie, something like\n>> ...\n>\n> Thanks for the work-arounds, but again, that's not quite what I'm after.\n>\n> Best,\n>\n> -Nate\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 8 Jul 2012 08:03:26 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Terrible plan for join to nested union"
},
{
"msg_contents": ">you are using EAV schema - it is against to relation model enough :)\r\n>this schema has the most terrible performance for large datasets - looks on hstore instead\r\n\r\n>Pavel\r\n\r\nActually despite the table named Attribute, I am not doing EAV though I can see why you'd think that. Attributes are part of the conceptual domain I'm modeling and I assure you there are first class columns in the schema for everything. Regardless, that has nothing to do with my performance problem with joining to a nested union.\r\n\r\n-Nate\r\n\n\n",
"msg_date": "Sun, 8 Jul 2012 09:43:58 +0000",
"msg_from": "Nate Allan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Terrible plan for join to nested union"
},
{
"msg_contents": "Nate Allan <[email protected]> writes:\n> It seems to me that the join condition (and hence the restriction) should be pushed down into both sides of the union to bring the cardinality limit from millions to 1. I'm imagining a rewrite like this: \n> \tR(a) J (b U c) -> (b J R(a)) U (c J R(a))\n> ...where R = Restrict, J = Join, U = Union\n\n[ eyes that suspiciously ... ] I'm not convinced that such a\ntransformation is either correct in general (you seem to be assuming\nat least that A's join column is unique, and what is the UNION operator\nsupposed to do with A's other columns?) or likely to lead to a\nperformance improvement in general.\n\nWe possibly could push down a join condition on the inner side of a\nnestloop, similarly to what's done in the UNION ALL case ... but that\nwould require a complete refactoring of what the planner does with\nUNIONs. By and large, very little optimization effort has been put\ninto non-ALL UNION (or INTERSECT or EXCEPT). You should not expect\nthat to change on a time scale of less than years.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 08 Jul 2012 11:56:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Terrible plan for join to nested union"
},
{
"msg_contents": ">>Nate Allan <[email protected]> writes:\n>> It seems to me that the join condition (and hence the restriction) should be pushed down into both sides of the union to bring the cardinality limit from millions to 1. I'm imagining a rewrite like this: \n>> \tR(a) J (b U c) -> (b J R(a)) U (c J R(a)) ...where R = Restrict, J \n>> = Join, U = Union\n\n>[ eyes that suspiciously ... ] I'm not convinced that such a transformation is either correct in general (you seem to be assuming at least that A's join column is unique, and >what is the UNION operator supposed to do with A's other columns?) or likely to lead to a performance improvement in general.\n\nIf there are more columns, you are correct that you might have to project off any additional columns within the union, and leave the join outside of the union intact to bring in the extra columns. Those are essentially the same considerations as when making other rewrites though. As for this optimization making unions faster in general, I would argue that it is rather easy to produce a plan superior to complete materialization of the union.\n\n>We possibly could push down a join condition on the inner side of a nestloop, similarly to what's done in the UNION ALL case ... but that would require a complete >refactoring of what the planner does with UNIONs. By and large, very little optimization effort has been put into non-ALL UNION (or INTERSECT or EXCEPT). You should >not expect that to change on a time scale of less than years.\n\nI hate to come across as contrary, but I'm pretty shocked by this answer for a couple reasons:\n1) This is a clear-cut case of an untenable execution plan, essentially a bug in the planner. This response contradicts the widely broadcast assertion that the PG community fixes planner bugs quickly and will not introduce hints because they would rather address these kinds of issues \"correctly\".\n2) Why would more effort go into Union All rather than Union? Are people using Union All more than Union, and if so is this because they actually want duplicates or is it because they've been trained to due to the performance problems with Union? Union All, in many people's opinions, shouldn't even exist in a true relational sense.\n\nAgain, sorry if I'm coming off as abrasive, I've spent political capital pushing to get PG in on this project, and now I'm a little worried about whether it is going to work for this kind of scale and complexity, so I'm a little stressed. I do appreciate your responses.\n\nBest,\n\n-Nate\n\n\n",
"msg_date": "Mon, 9 Jul 2012 04:02:23 +0000",
"msg_from": "Nate Allan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Terrible plan for join to nested union"
},
{
"msg_contents": "Nate Allan <[email protected]> writes:\n> 2) Why would more effort go into Union All rather than Union?\n\nThe UNION ALL case matches up with, and shares planning and execution\ncode with, table-inheritance and partitioning scenarios. So yes, it\nreally is more interesting to more people than UNION DISTINCT.\n(IIRC, the code that does that stuff was originally meant to support the\ninheritance case, and we hacked UNION ALL to be able to share the logic,\nnot vice versa.)\n\nRight now, UNION DISTINCT, along with INTERSECT and EXCEPT, have\nbasically no optimization support whatsoever: all of them go through a\ncode path that just evaluates both input relations and performs the\nset-combination operation. All of that code dates from a period about\na dozen years ago when we were more interested in getting the right\nanswer at all than how fast it was. Rewriting it all to have some\noptimization capability is certainly on the wish-list ... but the fact\nthat it hasn't risen to the top of anybody's to-do list in that time\nindicates to me that it probably isn't going to get done in the next\nlittle while either. And even if someone were to start working on it\nright now, it's not a small project.\n\nSorry to be the bearer of bad news, but this isn't going to change\njust because you try to label it a bug.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Jul 2012 01:49:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Terrible plan for join to nested union"
},
{
"msg_contents": ">Right now, UNION DISTINCT, along with INTERSECT and EXCEPT, have basically no optimization support whatsoever...\n> Sorry to be the bearer of bad news, but this isn't going to change just because you try to label it a bug.\n\nGiven the medium, I'll try not to read that in a snarky tone, after all, surely it's not unreasonable to label it a defect for a system not to optimize one of the basic relational primitives. That said, I know well the annoyance when a user cries bug when the system is working as-designed. In any case, I'm at least glad to have resolution; I know that there is no choice but to work around it.\n\nFor a maximally general work-around given that the union is the essence of a reused view, perhaps a reasonable approach is to switch to Union All and nest it within a Distinct outer query. That seems to produce workable plans in my tests so far. Maybe that could even form the basis of a planner enhancement that wouldn't require a complete refactor.\n\nThanks again,\n\n-Nate\n\n\n",
"msg_date": "Mon, 9 Jul 2012 06:20:41 +0000",
"msg_from": "Nate Allan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Terrible plan for join to nested union"
}
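For reference, the view rewrite described in that last paragraph (UNION ALL with the duplicate removal hoisted into an outer DISTINCT) would look roughly like:

    select 'anything' as result
      from "Attribute" as A1
      join (select distinct "SourceID"
              from (select R."TargetID" as "SourceID"
                      from "Relationship" as R
                    union all
                    select A2."PersonID" as "SourceID"
                      from "Attribute" as A2) as U
           ) as X on A1."PersonID" = X."SourceID"
     where A1."ID" = 124791200;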
] |
[
{
"msg_contents": "Hi Andrew,\n\nSure... We are sending data in Json to clients\n\n{\ntotal:6784,\ndata:[50 rows for desired page]\n}\n\nSELECT count(*) FROM table - without where, without joins used to have\nbad performance... However, in real scenario we have never had the case\nwithout joins and where... Join columns are always indexed, and we\nalways try to put indexes on columns what will mostly be used in where\nin usual queries...\n\nSo far we haven't met performance problems...\n\nBut to be honest with you, total info very rarely in our responses is\nbigger then 10k, and mainly is less then 1k... what is really small\nnumber todays.. (even tables have few million rows, but restrictions\nalways reduce \"desired\" total data on less then 1000...)\n\nWhen users want to work on something on every day basis... Usually they\nwant \"immediatly\", things, what are just for them...draft things on\nwhat they worked in last few days, or assigned just to them etc etc...\n\nWhen they need to pass trough some process once a month... And\nperformance is \"slow\" - usually they don't bother... Every day tasks is\nwhat is important and what we care about to have good performance...\n\n\nIn very rarely cases, when we know, performance must be slow from many\nreasons - we are lying :) - return first page, (hopefully with data\nwhat user looking for), and return 1000 as total... Return result to\nuser, and async run CalculateTotalForThisCaseAndCache it... On first\nnext request for the same thing (but other page) if calculation is\ndone, return results from cache (with real total number)... But it is\nreally on very exceptional basis then on regular...\n\nCheers\n\nMisa\n\nSent from my Windows Phone\nFrom: Andrew Dunstan\nSent: 09/07/2012 19:47\nTo: [email protected]\nSubject: Re: [PERFORM] Paged Query\n\nOn 07/09/2012 01:41 PM, Misa Simic wrote:\n>\n>\n> From my experience users even very rare go to ending pages... easier\n> to them would be to sort data by field to get those rows in very first\n> pages...\n>\n>\n\n\nYeah, the problem really is that most client code wants to know how many\npages there are, even if it only wants one page right now.\n\ncheers\n\nandrew\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 9 Jul 2012 15:24:31 -0700",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Paged Query"
},
{
"msg_contents": "On 07/10/2012 06:24 AM, Misa Simic wrote:\n> Hi Andrew,\n>\n> Sure... We are sending data in Json to clients\n>\n> {\n> total:6784,\n> data:[50 rows for desired page]\n> }\n>\n> SELECT count(*) FROM table - without where, without joins used to have\n> bad performance... However, in real scenario we have never had the case\n> without joins and where... Join columns are always indexed, and we\n> always try to put indexes on columns what will mostly be used in where\n> in usual queries...\n\nWhen/if you do need a count of a single table without any filters, a \ncommon trick is to use table statistics to return an approximation. If \nyour autovaccum is running regularly it's usually a very good \napproximation, too.\n\nSounds like this hack may become unnecessary in 9.2 though.\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 10 Jul 2012 07:50:29 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
},
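The statistics-based approximation mentioned above usually amounts to reading reltuples straight from the catalog; a sketch, with some_table as a placeholder:

    -- Approximate row count maintained by ANALYZE/autovacuum,
    -- returned without scanning the table.
    SELECT reltuples::bigint AS approx_rows
      FROM pg_class
     WHERE oid = 'public.some_table'::regclass;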
{
"msg_contents": "On Mon, Jul 9, 2012 at 4:50 PM, Craig Ringer <[email protected]> wrote:\n>\n>\n> When/if you do need a count of a single table without any filters, a common\n> trick is to use table statistics to return an approximation. If your\n> autovaccum is running regularly it's usually a very good approximation, too.\n>\n> Sounds like this hack may become unnecessary in 9.2 though.\n\nIndex only scans in 9.2 are nice, but they don't fundamentally change\nthis type of thing.\n\nCheers,\n\nJeff\n",
"msg_date": "Mon, 9 Jul 2012 17:34:14 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Paged Query"
}
] |
[
{
"msg_contents": "Howdy!\n\nI'm trying to figure out why checkpointing it completely pegging my I/O under moderate to high write load, \nI'm on PG9.1.1, RHEL 6.2 x64\n\ncheckpoint_completion_target = 0.7\ncheckpoint_timeout = 10m\n\nJul 10 00:32:30 perf01 postgres[52619]: [1895-1] user=,db= LOG: checkpoint starting: time\n[...]\nJul 10 00:36:47 perf01 postgres[52619]: [1896-1] user=,db= LOG: checkpoint complete: wrote 119454 buffers (11.4%); 0 transaction log file(s) added, 0 removed\n\nWatching my I/O with: iostat -t -d -x dm-2 5 \nWhich is my $PGDATA mount point (ext4). \nI get the following:\nDate\t\t\tr/s\tw/s\trsec/s\twsec/s\t\tawait\tsvctm\t%util\n[...]\n07/10/12 00:35:36\t0\t69.8\t0\t2233.6\t\t0.63\t0.07\t0.46\n07/10/12 00:35:41\t1.2\t810\t99.2\t22200\t\t4.13\t0.05\t4.02\n07/10/12 00:35:46\t0\t111.6\t0\t5422.4\t\t1.82\t0.08\t0.9\n07/10/12 00:35:51\t0\t299.2\t0\t5670.4\t\t1.27\t0.04\t1.24\n07/10/12 00:35:56\t0.8\t176.6\t41.6\t3654.4\t\t2.16\t0.07\t1.32\n07/10/12 00:36:01\t0\t364.8\t0\t6670.4\t\t1.1\t0.04\t1.62\n07/10/12 00:36:06\t0.8\t334.6\t12.8\t5953.6\t\t1.18\t0.05\t1.64\n07/10/12 00:36:11\t0\t118.6\t0\t6948.8\t\t1.82\t0.07\t0.82\n07/10/12 00:36:16\t0\t8274.6\t0\t148764.8\t10.55\t0.07\t61.18\n07/10/12 00:36:21\t0.2\t8577.4\t3.2\t161806.4\t16.68\t0.12\t99.62\n07/10/12 00:36:26\t0.8\t9244.6\t12.8\t167841.6\t15.01\t0.11\t99.82\n07/10/12 00:36:31\t0.8\t9434.2\t44.8\t208156.8\t16.22\t0.11\t99.7\n07/10/12 00:36:36\t0\t9582.8\t0\t202508.8\t14.84\t0.1\t99.72\n07/10/12 00:36:41\t0\t9830.2\t0\t175326.4\t14.42\t0.1\t99.5\n07/10/12 00:36:46\t0\t8208.6\t0\t149372.8\t17.82\t0.12\t99.64\n07/10/12 00:36:51\t3\t1438.4\t102.4\t26748.8\t\t8.49\t0.12\t18\n07/10/12 00:36:56\t0.6\t2004.6\t9.6\t27400\t\t1.25\t0.03\t5.74\n07/10/12 00:37:01\t0.6\t1723\t9.6\t23758.4\t\t1.85\t0.03\t5.08\n07/10/12 00:37:06\t0.4\t181.2\t35.2\t2928\t\t1.49\t0.06\t1.06\n\nThe ramp up is barely using any I/O, but then just before the checkpoint ends I get a \nflood of I/O all at once. \n\nI thought that the idea of checkpoint_completion_target was that we try to finish writing\nout the data throughout the entire checkpoint (leaving some room to spare, in my case 30%\nof the total estimated checkpoint time)\n\nBut what appears to be happening is that all of the data is being written out at the end of the checkpoint.\n\nThis happens at every checkpoint while the system is under load.\n\nI get the feeling that this isn't the correct behavior and i've done something wrong. \n\nAlso, I didn't see this sort of behavior in PG 8.3, however unfortunately, I don't have data to back that \nstatement up.\n\nAny suggestions. I'm willing and able to profile, or whatever.\n\nThanks\n\n\nHowdy!I'm trying to figure out why checkpointing it completely pegging my I/O under moderate to high write load, I'm on PG9.1.1, RHEL 6.2 x64checkpoint_completion_target = 0.7checkpoint_timeout = 10mJul 10 00:32:30 perf01 postgres[52619]: [1895-1] user=,db= LOG: checkpoint starting: time[...]Jul 10 00:36:47 perf01 postgres[52619]: [1896-1] user=,db= LOG: checkpoint complete: wrote 119454 buffers (11.4%); 0 transaction log file(s) added, 0 removedWatching my I/O with: iostat -t -d -x dm-2 5 Which is my $PGDATA mount point (ext4). 
I get the following:Date r/s w/s rsec/s wsec/s await svctm %util[...]07/10/12 00:35:36 0 69.8 0 2233.6 0.63 0.07 0.4607/10/12 00:35:41 1.2 810 99.2 22200 4.13 0.05 4.0207/10/12 00:35:46 0 111.6 0 5422.4 1.82 0.08 0.907/10/12 00:35:51 0 299.2 0 5670.4 1.27 0.04 1.2407/10/12 00:35:56 0.8 176.6 41.6 3654.4 2.16 0.07 1.3207/10/12 00:36:01 0 364.8 0 6670.4 1.1 0.04 1.6207/10/12 00:36:06 0.8 334.6 12.8 5953.6 1.18 0.05 1.6407/10/12 00:36:11 0 118.6 0 6948.8 1.82 0.07 0.8207/10/12 00:36:16 0 8274.6 0 148764.8 10.55 0.07 61.1807/10/12 00:36:21 0.2 8577.4 3.2 161806.4 16.68 0.12 99.6207/10/12 00:36:26 0.8 9244.6 12.8 167841.6 15.01 0.11 99.8207/10/12 00:36:31 0.8 9434.2 44.8 208156.8 16.22 0.11 99.707/10/12 00:36:36 0 9582.8 0 202508.8 14.84 0.1 99.7207/10/12 00:36:41 0 9830.2 0 175326.4 14.42 0.1 99.507/10/12 00:36:46 0 8208.6 0 149372.8 17.82 0.12 99.6407/10/12 00:36:51 3 1438.4 102.4 26748.8 8.49 0.12 1807/10/12 00:36:56 0.6 2004.6 9.6 27400 1.25 0.03 5.7407/10/12 00:37:01 0.6 1723 9.6 23758.4 1.85 0.03 5.0807/10/12 00:37:06 0.4 181.2 35.2 2928 1.49 0.06 1.06The ramp up is barely using any I/O, but then just before the checkpoint ends I get a flood of I/O all at once. I thought that the idea of checkpoint_completion_target was that we try to finish writingout the data throughout the entire checkpoint (leaving some room to spare, in my case 30%of the total estimated checkpoint time)But what appears to be happening is that all of the data is being written out at the end of the checkpoint.This happens at every checkpoint while the system is under load.I get the feeling that this isn't the correct behavior and i've done something wrong. Also, I didn't see this sort of behavior in PG 8.3, however unfortunately, I don't have data to back that statement up.Any suggestions. I'm willing and able to profile, or whatever.Thanks",
"msg_date": "Mon, 9 Jul 2012 22:39:35 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Massive I/O spikes during checkpoint"
},
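One low-cost way to confirm where the checkpoint time is going (the spread-out write phase versus the final sync) is to turn on checkpoint logging; on 9.1 the "checkpoint complete" line then reports separate write, sync and total times. A minimal postgresql.conf sketch (a reload is enough, no restart needed for this setting):

log_checkpoints = on

If the sync time dominates while the write time is spread over the checkpoint_completion_target window, the spreading itself is working and the problem is the kernel-level flush at the end, which is where the discussion below goes.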
{
"msg_contents": ">\n>\n>\n> But what appears to be happening is that all of the data is being written\n> out at the end of the checkpoint.\n>\n> This happens at every checkpoint while the system is under load.\n>\n> I get the feeling that this isn't the correct behavior and i've done\n> something wrong.\n>\n>\n>\nIt's not an actual checkpoints.\nIt's is a fsync after checkpoint which create write spikes hurting server.\n\nYou should set sysctl vm.dirty_background_bytes and vm.dirty_bytes to\nreasonable low values\n(for 512MB raid controller with cache I would suggest to sometning like\nvm.dirty_background_bytes = 33554432\nvm.dirty_bytes = 268435456\n32MB and 256MB respectively)\n\nIf youre server doesn't have raid with BBU cache - then you should tune\nthese values to much lower values.\n\nPlease read http://blog.2ndquadrant.com/tuning_linux_for_low_postgresq/\nand related posts.\n\n-- \nMaxim Boguk\nSenior Postgresql DBA.\nhttp://www.postgresql-consulting.com/\n\nPhone RU: +7 910 405 4718\nPhone AU: +61 45 218 5678\n\nSkype: maxim.boguk\nJabber: [email protected]\nМойКруг: http://mboguk.moikrug.ru/\n\n\"People problems are solved with people.\nIf people cannot solve the problem, try technology.\nPeople will then wish they'd listened at the first stage.\"\n\nBut what appears to be happening is that all of the data is being written out at the end of the checkpoint.\nThis happens at every checkpoint while the system is under load.I get the feeling that this isn't the correct behavior and i've done something wrong. \nIt's not an actual checkpoints.It's is a fsync after checkpoint which create write spikes hurting server.You should set sysctl vm.dirty_background_bytes and vm.dirty_bytes to reasonable low values\n\n(for 512MB raid controller with cache I would suggest to sometning likevm.dirty_background_bytes = 33554432 vm.dirty_bytes = 26843545632MB and 256MB respectively)If youre server doesn't have raid with BBU cache - then you should tune these values to much lower values.\nPlease read http://blog.2ndquadrant.com/tuning_linux_for_low_postgresq/ and related posts.-- Maxim BogukSenior Postgresql DBA.\nhttp://www.postgresql-consulting.com/Phone RU: +7 910 405 4718Phone AU: +61 45 218 5678Skype: maxim.bogukJabber: [email protected]\n\nМойКруг: http://mboguk.moikrug.ru/\"People problems are solved with people. If people cannot solve the problem, try technology. People will then wish they'd listened at the first stage.\"",
"msg_date": "Tue, 10 Jul 2012 15:51:32 +1000",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive I/O spikes during checkpoint"
},
{
"msg_contents": "On Mon, Jul 9, 2012 at 10:39 PM, David Kerr <[email protected]> wrote:\n>\n> I thought that the idea of checkpoint_completion_target was that we try to\n> finish writing\n> out the data throughout the entire checkpoint (leaving some room to spare,\n> in my case 30%\n> of the total estimated checkpoint time)\n>\n> But what appears to be happening is that all of the data is being written\n> out at the end of the checkpoint.\n\nPostgres is writing data out to the kernel throughout the checkpoint.\nBut the kernel is just buffering it up dirty, until the end of the\ncheckpoint when the fsyncs start landing like bombs.\n\n>\n> This happens at every checkpoint while the system is under load.\n>\n> I get the feeling that this isn't the correct behavior and i've done\n> something wrong.\n>\n> Also, I didn't see this sort of behavior in PG 8.3, however unfortunately, I\n> don't have data to back that\n> statement up.\n\nDid you have less RAM back when you were running PG 8.3?\n\n> Any suggestions. I'm willing and able to profile, or whatever.\n\nWho much RAM do you have? What are your settings for /proc/sys/vm/dirty_* ?\n\nCheers,\n\nJeff\n",
"msg_date": "Mon, 9 Jul 2012 22:52:59 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive I/O spikes during checkpoint"
},
{
"msg_contents": "\nOn Jul 9, 2012, at 10:52 PM, Jeff Janes wrote:\n\n> On Mon, Jul 9, 2012 at 10:39 PM, David Kerr <[email protected]> wrote:\n>> \n>> I thought that the idea of checkpoint_completion_target was that we try to\n>> finish writing\n>> out the data throughout the entire checkpoint (leaving some room to spare,\n>> in my case 30%\n>> of the total estimated checkpoint time)\n>> \n>> But what appears to be happening is that all of the data is being written\n>> out at the end of the checkpoint.\n> \n> Postgres is writing data out to the kernel throughout the checkpoint.\n> But the kernel is just buffering it up dirty, until the end of the\n> checkpoint when the fsyncs start landing like bombs.\n\nAhh. duh!\n\nI guess i assumed that the point of spreading the checkpoint I/O was \nspreading the syncs out. \n\n> \n>> \n>> This happens at every checkpoint while the system is under load.\n>> \n>> I get the feeling that this isn't the correct behavior and i've done\n>> something wrong.\n>> \n>> Also, I didn't see this sort of behavior in PG 8.3, however unfortunately, I\n>> don't have data to back that\n>> statement up.\n> \n> Did you have less RAM back when you were running PG 8.3?\nnope. I was on RHEL 5.5 back then though.\n\n> \n>> Any suggestions. I'm willing and able to profile, or whatever.\n> \n> Who much RAM do you have? What are your settings for /proc/sys/vm/dirty_* ?\n\n256G \nand I've been running with this for a while now, but I think that's the default in RHEL 6+\necho 10 > /proc/sys/vm/dirty_ratio \necho 5 >/proc/sys/vm/dirty_background_ratio\n\n\n",
"msg_date": "Mon, 9 Jul 2012 22:59:04 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive I/O spikes during checkpoint"
},
{
"msg_contents": "On Jul 9, 2012, at 10:51 PM, Maxim Boguk wrote:\n\n> \n> \n> But what appears to be happening is that all of the data is being written out at the end of the checkpoint.\n> \n> This happens at every checkpoint while the system is under load.\n> \n> I get the feeling that this isn't the correct behavior and i've done something wrong. \n> \n> \n> \n> It's not an actual checkpoints.\n> It's is a fsync after checkpoint which create write spikes hurting server.\n> You should set sysctl vm.dirty_background_bytes and vm.dirty_bytes to reasonable low values\n\nSo use bla_bytes instead of bla_ratio?\n\n> (for 512MB raid controller with cache I would suggest to sometning like\n> vm.dirty_background_bytes = 33554432 \n> vm.dirty_bytes = 268435456\n> 32MB and 256MB respectively)\n\nI'll take a look.\n\n> \n> If youre server doesn't have raid with BBU cache - then you should tune these values to much lower values.\n> \n> Please read http://blog.2ndquadrant.com/tuning_linux_for_low_postgresq/ \n> and related posts.\n\nyeah, I saw that I guess I didn't put 2+2 together. thanks.\n\n\n\n\nOn Jul 9, 2012, at 10:51 PM, Maxim Boguk wrote:But what appears to be happening is that all of the data is being written out at the end of the checkpoint.\nThis happens at every checkpoint while the system is under load.I get the feeling that this isn't the correct behavior and i've done something wrong. \nIt's not an actual checkpoints.It's is a fsync after checkpoint which create write spikes hurting server.You should set sysctl vm.dirty_background_bytes and vm.dirty_bytes to reasonable low valuesSo use bla_bytes instead of bla_ratio?\n\n(for 512MB raid controller with cache I would suggest to sometning likevm.dirty_background_bytes = 33554432 vm.dirty_bytes = 26843545632MB and 256MB respectively)I'll take a look.If youre server doesn't have raid with BBU cache - then you should tune these values to much lower values.\nPlease read http://blog.2ndquadrant.com/tuning_linux_for_low_postgresq/ and related posts.yeah, I saw that I guess I didn't put 2+2 together. thanks.",
"msg_date": "Mon, 9 Jul 2012 23:03:23 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive I/O spikes during checkpoint"
},
{
"msg_contents": "On Tue, Jul 10, 2012 at 4:03 PM, David Kerr <[email protected]> wrote:\n\n>\n> On Jul 9, 2012, at 10:51 PM, Maxim Boguk wrote:\n>\n>\n>>\n>> But what appears to be happening is that all of the data is being written\n>> out at the end of the checkpoint.\n>>\n>> This happens at every checkpoint while the system is under load.\n>>\n>> I get the feeling that this isn't the correct behavior and i've done\n>> something wrong.\n>>\n>>\n>>\n> It's not an actual checkpoints.\n> It's is a fsync after checkpoint which create write spikes hurting server.\n>\n> You should set sysctl vm.dirty_background_bytes and vm.dirty_bytes to\n> reasonable low values\n>\n>\n> So use bla_bytes instead of bla_ratio?\n>\n>\nYes because on 256GB server\necho 10 > /proc/sys/vm/dirty_ratio\nis equivalent to 26Gb dirty_bytes\n\nand\necho 5 >/proc/sys/vm/dirty_background_ratio\nis equivalent to 13Gb dirty_background_bytes\n\nIt is really huge values.\n\nSo kernel doesn't start write any pages out in background before it has at\nleast 13Gb dirty pages in kernel memory.\nAnd at end of the checkpoint kernel trying flush all dirty pages to disk.\n\nEven echo 1 >/proc/sys/vm/dirty_background_ratio is too high value for\ncontemporary server.\nThat is why *_bytes controls added to kernel.\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttp://www.postgresql-consulting.com/\n\nOn Tue, Jul 10, 2012 at 4:03 PM, David Kerr <[email protected]> wrote:\nOn Jul 9, 2012, at 10:51 PM, Maxim Boguk wrote:\nBut what appears to be happening is that all of the data is being written out at the end of the checkpoint.\nThis happens at every checkpoint while the system is under load.I get the feeling that this isn't the correct behavior and i've done something wrong. \nIt's not an actual checkpoints.It's is a fsync after checkpoint which create write spikes hurting server.You should set sysctl vm.dirty_background_bytes and vm.dirty_bytes to reasonable low values\nSo use bla_bytes instead of bla_ratio?Yes because on 256GB server echo 10 > /proc/sys/vm/dirty_ratiois equivalent to 26Gb dirty_bytes\nandecho 5 >/proc/sys/vm/dirty_background_ratiois equivalent to 13Gb dirty_background_bytesIt is really huge values.So kernel doesn't start write any pages out in background before it has at least 13Gb dirty pages in kernel memory.\n\nAnd at end of the checkpoint kernel trying flush all dirty pages to disk.Even echo 1 >/proc/sys/vm/dirty_background_ratio is too high value for contemporary server.That is why *_bytes controls added to kernel.\n-- Maxim BogukSenior Postgresql DBAhttp://www.postgresql-consulting.com/",
"msg_date": "Tue, 10 Jul 2012 16:14:00 +1000",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive I/O spikes during checkpoint"
},
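Putting Maxim's numbers into a persistent form, a sketch of the sysctl settings being suggested; treat the exact values as the thread's example for a 512MB BBU write cache, not a universal recommendation:

# /etc/sysctl.conf
vm.dirty_background_bytes = 33554432   # 32MB: start background writeback early
vm.dirty_bytes = 268435456             # 256MB: hard limit before writers are throttled

# apply without a reboot
sysctl -p

The *_bytes and *_ratio knobs are two views of the same limits (setting one clears the other), which is exactly the point here, since 10% of 256GB under the ratio-based defaults is the roughly 26GB of unwritten dirty data described above.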
{
"msg_contents": "On 7/9/2012 11:14 PM, Maxim Boguk wrote:\n>\n>\n> On Tue, Jul 10, 2012 at 4:03 PM, David Kerr <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n>\n> On Jul 9, 2012, at 10:51 PM, Maxim Boguk wrote:\n>\n>>\n>>\n>> But what appears to be happening is that all of the data is\n>> being written out at the end of the checkpoint.\n>>\n>> This happens at every checkpoint while the system is under load.\n>>\n>> I get the feeling that this isn't the correct behavior and\n>> i've done something wrong.\n>>\n>>\n>>\n>> It's not an actual checkpoints.\n>> It's is a fsync after checkpoint which create write spikes hurting\n>> server.\n>> You should set sysctl vm.dirty_background_bytes and vm.dirty_bytes\n>> to reasonable low values\n>\n> So use bla_bytes instead of bla_ratio?\n>\n>\n> Yes because on 256GB server\n> echo 10 > /proc/sys/vm/dirty_ratio\n> is equivalent to 26Gb dirty_bytes\n>\n> and\n> echo 5 >/proc/sys/vm/dirty_background_ratio\n> is equivalent to 13Gb dirty_background_bytes\n>\n> It is really huge values.\n<sigh> yeah, I never bothered to think that through.\n\n> So kernel doesn't start write any pages out in background before it has\n> at least 13Gb dirty pages in kernel memory.\n> And at end of the checkpoint kernel trying flush all dirty pages to disk.\n>\n> Even echo 1 >/proc/sys/vm/dirty_background_ratio is too high value for\n> contemporary server.\n> That is why *_bytes controls added to kernel.\n\nAwesome, Thanks.\n",
"msg_date": "Mon, 09 Jul 2012 23:17:48 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive I/O spikes during checkpoint"
},
{
"msg_contents": "On Tuesday, July 10, 2012 08:14:00 AM Maxim Boguk wrote:\n> On Tue, Jul 10, 2012 at 4:03 PM, David Kerr <[email protected]> wrote:\n> > On Jul 9, 2012, at 10:51 PM, Maxim Boguk wrote:\n> >> But what appears to be happening is that all of the data is being\n> >> written out at the end of the checkpoint.\n> >> \n> >> This happens at every checkpoint while the system is under load.\n> >> \n> >> I get the feeling that this isn't the correct behavior and i've done\n> >> something wrong.\n> > \n> > It's not an actual checkpoints.\n> > It's is a fsync after checkpoint which create write spikes hurting\n> > server.\n> > \n> > You should set sysctl vm.dirty_background_bytes and vm.dirty_bytes to\n> > reasonable low values\n> > \n> > \n> > So use bla_bytes instead of bla_ratio?\n> \n> Yes because on 256GB server\n> echo 10 > /proc/sys/vm/dirty_ratio\n> is equivalent to 26Gb dirty_bytes\n> \n> and\n> echo 5 >/proc/sys/vm/dirty_background_ratio\n> is equivalent to 13Gb dirty_background_bytes\n> \n> It is really huge values.\n> \n> So kernel doesn't start write any pages out in background before it has at\n> least 13Gb dirty pages in kernel memory.\n> And at end of the checkpoint kernel trying flush all dirty pages to disk.\nThast not entirely true. The kernel will also writeout pages which haven't \nbeen written to for dirty_expire_centisecs.\n\nBut yes, adjusting dirty_* is definitely a good idea.\n\nAndres\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n",
"msg_date": "Tue, 10 Jul 2012 14:44:14 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive I/O spikes during checkpoint"
},
{
"msg_contents": "On Tue, Jul 10, 2012 at 5:44 AM, Andres Freund <[email protected]> wrote:\n> On Tuesday, July 10, 2012 08:14:00 AM Maxim Boguk wrote:\n>>\n>> So kernel doesn't start write any pages out in background before it has at\n>> least 13Gb dirty pages in kernel memory.\n>> And at end of the checkpoint kernel trying flush all dirty pages to disk.\n\n> Thast not entirely true. The kernel will also writeout pages which haven't\n> been written to for dirty_expire_centisecs.\n\nThere seems to be many situations in which it totally fails to do that.\n\nAlthough I've never been able to categorize just what those situations are.\n\nCheers,\n\nJeff\n",
"msg_date": "Tue, 10 Jul 2012 06:36:35 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive I/O spikes during checkpoint"
},
{
"msg_contents": "On Tuesday, July 10, 2012 03:36:35 PM Jeff Janes wrote:\n> On Tue, Jul 10, 2012 at 5:44 AM, Andres Freund <[email protected]> \nwrote:\n> > On Tuesday, July 10, 2012 08:14:00 AM Maxim Boguk wrote:\n> >> So kernel doesn't start write any pages out in background before it has\n> >> at least 13Gb dirty pages in kernel memory.\n> >> And at end of the checkpoint kernel trying flush all dirty pages to\n> >> disk.\n> > \n> > Thast not entirely true. The kernel will also writeout pages which\n> > haven't been written to for dirty_expire_centisecs.\n> \n> There seems to be many situations in which it totally fails to do that.\nTotally as in diry pages sitting around without any io activity? Or just not \nagressive enough?\n\nCurrently its a bit hard to speculate about all without specifying the kernel \nbecause there have been massive rewrites of all that stuff in several kernels \nin the last two years...\n\nAndres\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n",
"msg_date": "Tue, 10 Jul 2012 15:41:18 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive I/O spikes during checkpoint"
}
] |
[
{
"msg_contents": "Dear @,\n\nIs there any tool or some sort of script available, for PostgreSQL, which \ncan be used to measure scalability of an application's database. Or is \nthere any guideline on how to do this.\n\nI am a bit confused about the concept of measuring scalability of an \napplication's database.\n\nHow is the scalability measured? \n\nIs it like loading the DB with a bulk data volume and then do performance \ntesting by using tools like JMeter?\n\nCould any one kindly help me on this..\n\nThanks,\n Sreejith.\n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you\n\n\n\nDear @,\n\nIs there any tool or some sort of script\navailable, for PostgreSQL, which can be used to measure scalability of\nan application's database. Or is there any guideline on how to do this.\n\nI am a bit confused about the concept\nof measuring scalability of an application's database.\n\nHow is the scalability measured? \n\nIs it like loading the DB with a bulk\ndata volume and then do performance testing by using tools like JMeter?\n\nCould any one kindly help me on this..\n\nThanks,\n Sreejith.=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you",
"msg_date": "Tue, 10 Jul 2012 13:51:06 +0530",
"msg_from": "Sreejith Balakrishnan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Any tool/script available which can be used to measure\n\tscalability of an application's database."
},
{
"msg_contents": "On Tue, Jul 10, 2012 at 12:21 PM, Sreejith Balakrishnan\n<[email protected]> wrote:\n> Dear @,\n>\n> Is there any tool or some sort of script available, for PostgreSQL, which\n> can be used to measure scalability of an application's database. Or is there\n> any guideline on how to do this.\n\n\"scalability of an application's database\" can be understood either\nlike a relation of transactions per second to database size or like an\nability of database to be sharded/partitioned or may be like something\nelse.\n\nCould you please explain more specifically the original task?\nWhat is the goal of it?\n\n> I am a bit confused about the concept of measuring scalability of an\n> application's database.\n>\n> How is the scalability measured?\n>\n> Is it like loading the DB with a bulk data volume and then do performance\n> testing by using tools like JMeter?\n>\n> Could any one kindly help me on this..\n>\n> Thanks,\n> Sreejith.\n>\n> =====-----=====-----=====\n> Notice: The information contained in this e-mail\n> message and/or attachments to it may contain\n> confidential or privileged information. If you are\n> not the intended recipient, any dissemination, use,\n> review, distribution, printing or copying of the\n> information contained in this e-mail message\n> and/or attachments to it are strictly prohibited. If\n> you have received this communication in error,\n> please notify us by reply e-mail or telephone and\n> immediately and permanently delete the message\n> and any attachments. Thank you\n\n\n\n-- \nSergey Konoplev\n\na database and software architect\nhttp://www.linkedin.com/in/grayhemp\n\nJabber: [email protected] Skype: gray-hemp Phone: +79160686204\n",
"msg_date": "Fri, 13 Jul 2012 22:37:15 +0400",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any tool/script available which can be used to measure\n\tscalability of an application's database."
},
{
"msg_contents": "On Tue, Jul 10, 2012 at 10:21 AM, Sreejith Balakrishnan\n<[email protected]> wrote:\n> Is there any tool or some sort of script available, for PostgreSQL, which\n> can be used to measure scalability of an application's database. Or is there\n> any guideline on how to do this.\n>\n> I am a bit confused about the concept of measuring scalability of an\n> application's database.\n\nYou cannot measure scalability of a database as such. You need to\nknow the nature of the load (i.e. operations executed against the DB -\nhow many INSERT, UPDATE, DELETE and SELECT, against which tables and\nwith what frequency and criteria). And then, as Sergey said, you need\nto define whether you want to scale up the load or the size - or both.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Fri, 13 Jul 2012 22:06:55 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any tool/script available which can be used to measure\n\tscalability of an application's database."
},
{
"msg_contents": "Dear Sergev,\n\nWe have around 15 to 18 separate products.What we are told to do is to\ncheck the scalability of the underlying DB of each product (application).\nThat's the requirement.Nothing more was explained to us.That's why I said\nearlier that I am confused on how to approach this.\n\nRegards,\nSreejith.\nOn Jul 14, 2012 12:08 AM, \"Sergey Konoplev\" <[email protected]> wrote:\n\n> On Tue, Jul 10, 2012 at 12:21 PM, Sreejith Balakrishnan\n> <[email protected]> wrote:\n> > Dear @,\n> >\n> > Is there any tool or some sort of script available, for PostgreSQL, which\n> > can be used to measure scalability of an application's database. Or is\n> there\n> > any guideline on how to do this.\n>\n> \"scalability of an application's database\" can be understood either\n> like a relation of transactions per second to database size or like an\n> ability of database to be sharded/partitioned or may be like something\n> else.\n>\n> Could you please explain more specifically the original task?\n> What is the goal of it?\n>\n> > I am a bit confused about the concept of measuring scalability of an\n> > application's database.\n> >\n> > How is the scalability measured?\n> >\n> > Is it like loading the DB with a bulk data volume and then do performance\n> > testing by using tools like JMeter?\n> >\n> > Could any one kindly help me on this..\n> >\n> > Thanks,\n> > Sreejith.\n> >\n> > =====-----=====-----=====\n> > Notice: The information contained in this e-mail\n> > message and/or attachments to it may contain\n> > confidential or privileged information. If you are\n> > not the intended recipient, any dissemination, use,\n> > review, distribution, printing or copying of the\n> > information contained in this e-mail message\n> > and/or attachments to it are strictly prohibited. If\n> > you have received this communication in error,\n> > please notify us by reply e-mail or telephone and\n> > immediately and permanently delete the message\n> > and any attachments. Thank you\n>\n>\n>\n> --\n> Sergey Konoplev\n>\n> a database and software architect\n> http://www.linkedin.com/in/grayhemp\n>\n> Jabber: [email protected] Skype: gray-hemp Phone: +79160686204\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nDear Sergev,\nWe have around 15 to 18 separate products.What we are told to do is to check the scalability of the underlying DB of each product (application).\nThat's the requirement.Nothing more was explained to us.That's why I said earlier that I am confused on how to approach this.\nRegards,\n Sreejith.\nOn Jul 14, 2012 12:08 AM, \"Sergey Konoplev\" <[email protected]> wrote:\nOn Tue, Jul 10, 2012 at 12:21 PM, Sreejith Balakrishnan\n<[email protected]> wrote:\n> Dear @,\n>\n> Is there any tool or some sort of script available, for PostgreSQL, which\n> can be used to measure scalability of an application's database. 
Or is there\n> any guideline on how to do this.\n\n\"scalability of an application's database\" can be understood either\nlike a relation of transactions per second to database size or like an\nability of database to be sharded/partitioned or may be like something\nelse.\n\nCould you please explain more specifically the original task?\nWhat is the goal of it?\n\n> I am a bit confused about the concept of measuring scalability of an\n> application's database.\n>\n> How is the scalability measured?\n>\n> Is it like loading the DB with a bulk data volume and then do performance\n> testing by using tools like JMeter?\n>\n> Could any one kindly help me on this..\n>\n> Thanks,\n> Sreejith.\n>\n> =====-----=====-----=====\n> Notice: The information contained in this e-mail\n> message and/or attachments to it may contain\n> confidential or privileged information. If you are\n> not the intended recipient, any dissemination, use,\n> review, distribution, printing or copying of the\n> information contained in this e-mail message\n> and/or attachments to it are strictly prohibited. If\n> you have received this communication in error,\n> please notify us by reply e-mail or telephone and\n> immediately and permanently delete the message\n> and any attachments. Thank you\n\n\n\n--\nSergey Konoplev\n\na database and software architect\nhttp://www.linkedin.com/in/grayhemp\n\nJabber: [email protected] Skype: gray-hemp Phone: +79160686204\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 14 Jul 2012 06:51:07 +0530",
"msg_from": "B Sreejith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any tool/script available which can be used to measure\n\tscalability of an application's database."
},
{
"msg_contents": "Dear Robert,\n\nWe need to scale up both size and load.\nCould you please provide steps I need to follow.\n\nWarm regards,\nSreejith.\nOn Jul 14, 2012 1:37 AM, \"Robert Klemme\" <[email protected]> wrote:\n\n> On Tue, Jul 10, 2012 at 10:21 AM, Sreejith Balakrishnan\n> <[email protected]> wrote:\n> > Is there any tool or some sort of script available, for PostgreSQL, which\n> > can be used to measure scalability of an application's database. Or is\n> there\n> > any guideline on how to do this.\n> >\n> > I am a bit confused about the concept of measuring scalability of an\n> > application's database.\n>\n> You cannot measure scalability of a database as such. You need to\n> know the nature of the load (i.e. operations executed against the DB -\n> how many INSERT, UPDATE, DELETE and SELECT, against which tables and\n> with what frequency and criteria). And then, as Sergey said, you need\n> to define whether you want to scale up the load or the size - or both.\n>\n> Kind regards\n>\n> robert\n>\n> --\n> remember.guy do |as, often| as.you_can - without end\n> http://blog.rubybestpractices.com/\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nDear Robert,\nWe need to scale up both size and load.\nCould you please provide steps I need to follow.\nWarm regards,\n Sreejith.\nOn Jul 14, 2012 1:37 AM, \"Robert Klemme\" <[email protected]> wrote:\nOn Tue, Jul 10, 2012 at 10:21 AM, Sreejith Balakrishnan\n<[email protected]> wrote:\n> Is there any tool or some sort of script available, for PostgreSQL, which\n> can be used to measure scalability of an application's database. Or is there\n> any guideline on how to do this.\n>\n> I am a bit confused about the concept of measuring scalability of an\n> application's database.\n\nYou cannot measure scalability of a database as such. You need to\nknow the nature of the load (i.e. operations executed against the DB -\nhow many INSERT, UPDATE, DELETE and SELECT, against which tables and\nwith what frequency and criteria). And then, as Sergey said, you need\nto define whether you want to scale up the load or the size - or both.\n\nKind regards\n\nrobert\n\n--\nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 14 Jul 2012 06:56:38 +0530",
"msg_from": "B Sreejith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any tool/script available which can be used to measure\n\tscalability of an application's database."
},
{
"msg_contents": "On 07/14/2012 09:21 AM, B Sreejith wrote:\n>\n> Dear Sergev,\n>\n> We have around 15 to 18 separate products.What we are told to do is to \n> check the scalability of the underlying DB of each product (application).\n> That's the requirement.Nothing more was explained to us.That's why I \n> said earlier that I am confused on how to approach this.\n>\n\nSounds like your client / boss has a case of buzz-word-itis. \n\"Scalability\" means lots of different things:\n\n- How well it copes with growth of data sizes\n- How well it copes with growth of query rates / activity\n- How well it copes with larger user counts (may not be the same as prior)\n- Whether it's easily sharded onto multiple systems\n- Whether it has any locking choke-points that serialize common operations\n- ....\n\nPerhaps most importantly, your database is only as scalable as your \napplication's use of it. Two apps can use exactly the same database \nstructure, but one of them can struggle massively under load another one \nbarely notices. For example, if one app does this (pseudocode):\n\nSELECT id FROM customer WHERE ....\nFOR attribute IN customer\n SELECT :attribute.name FROM customer WHERE id = :customer.id\n IF attribute.is_changed THEN\n UPDATE customer SET :attribute.name = :attribute.new_value WHERE \nid = :customer.id\n END IF\n\nand another just does:\n\nUPDATE customer\nSET attribute1 = value1, attribute2 = value2, attribute3 = value3\nWHERE ....\n\n\nThe first will totally melt down under load that isn't significantly \ndifferent from idle as far as the second one is concerned.\n\nThat's a ridiculously bad example for the first app, but real examples \nthat aren't much better arise from badly tuned or badly written object \nrelational management systems. The classic \"N+1 selects\" problem and \nmassive inefficient multiple left outer joins are classics.\n\nThus, you can't really evaluate the scalability of the database under \nload separately from the application that's using it and the workload.\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 07/14/2012 09:21 AM, B Sreejith\n wrote:\n\n\n\nDear Sergev,\nWe have around 15 to 18 separate products.What we are told to\n do is to check the scalability of the underlying DB of each\n product (application).\n That's the requirement.Nothing more was explained to us.That's\n why I said earlier that I am confused on how to approach this.\n\n\n\n Sounds like your client / boss has a case of buzz-word-itis.\n \"Scalability\" means lots of different things:\n\n - How well it copes with growth of data sizes\n - How well it copes with growth of query rates / activity\n - How well it copes with larger user counts (may not be the same as\n prior)\n - Whether it's easily sharded onto multiple systems\n - Whether it has any locking choke-points that serialize common\n operations\n - ....\n\n Perhaps most importantly, your database is only as scalable as your\n application's use of it. Two apps can use exactly the same database\n structure, but one of them can struggle massively under load another\n one barely notices. 
For example, if one app does this (pseudocode):\n\n SELECT id FROM customer WHERE ....\n FOR attribute IN customer\n SELECT :attribute.name FROM customer WHERE id = :customer.id\n IF attribute.is_changed THEN\n UPDATE customer SET :attribute.name = :attribute.new_value\n WHERE id = :customer.id\n END IF \n\n and another just does:\n\n UPDATE customer\n SET attribute1 = value1, attribute2 = value2, attribute3 = value3 \n WHERE ....\n\n\n The first will totally melt down under load that isn't significantly\n different from idle as far as the second one is concerned.\n\n That's a ridiculously bad example for the first app, but real\n examples that aren't much better arise from badly tuned or badly\n written object relational management systems. The classic \"N+1\n selects\" problem and massive inefficient multiple left outer joins\n are classics.\n\n Thus, you can't really evaluate the scalability of the database\n under load separately from the application that's using it and the\n workload.\n\n --\n Craig Ringer",
"msg_date": "Sat, 14 Jul 2012 13:05:54 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any tool/script available which can be used to measure\n\tscalability of an application's database."
},
{
"msg_contents": "On 07/14/2012 09:26 AM, B Sreejith wrote:\n>\n> Dear Robert,\n>\n> We need to scale up both size and load.\n> Could you please provide steps I need to follow.\n>\n\nFor load, first you need to build a representative sample of your \napplication's querying patterns by logging queries and analysing the \nlogs. Produce a load generator based on that data, set up a test copy of \nyour database, and start pushing the query rate up to see what happens.\n\nFor simpler loads you can write a transaction script for pgbench based \non your queries.\n\nFor size: Copy your data set, then start duplicating it with munged \ncopies. Repeat, then use the load generator you wrote for the first part \nto see how scaling the data up affects your queries. See if anything is \nunacceptably slow (the \"auto_explain\" module is useful here) and examine it.\n\nThe truth is that predicting how complex database driven apps will scale \nis insanely hard, because access patterns change as data sizes and user \ncounts grow. You're likely to land up tuning for a scenario that's quite \ndifferent to the one that you actually face when you start hitting \nscaling limitations. This doesn't mean you should not investigate, it \njust means your trials don't prove anything and the optimisations you \nmake based on what you learn may not gain you much.\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 07/14/2012 09:26 AM, B Sreejith\n wrote:\n\n\n\nDear Robert,\nWe need to scale up both size and load.\n Could you please provide steps I need to follow.\n\n\n\n For load, first you need to build a representative sample of your\n application's querying patterns by logging queries and analysing the\n logs. Produce a load generator based on that data, set up a test\n copy of your database, and start pushing the query rate up to see\n what happens.\n\n For simpler loads you can write a transaction script for pgbench\n based on your queries.\n\n For size: Copy your data set, then start duplicating it with munged\n copies. Repeat, then use the load generator you wrote for the first\n part to see how scaling the data up affects your queries. See if\n anything is unacceptably slow (the \"auto_explain\" module is useful\n here) and examine it.\n\n The truth is that predicting how complex database driven apps will\n scale is insanely hard, because access patterns change as data sizes\n and user counts grow. You're likely to land up tuning for a scenario\n that's quite different to the one that you actually face when you\n start hitting scaling limitations. This doesn't mean you should not\n investigate, it just means your trials don't prove anything and the\n optimisations you make based on what you learn may not gain you\n much.\n\n --\n Craig Ringer",
"msg_date": "Sat, 14 Jul 2012 16:48:57 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any tool/script available which can be used to measure\n\tscalability of an application's database."
},
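For the pgbench route Craig mentions, a sketch of both styles of run; the database names, the -s 100 scale and the script file name are placeholders, and the custom-script variant assumes app_queries.sql contains representative statements captured from the application logs:

# built-in TPC-B-style workload against a freshly initialized test database
pgbench -i -s 100 benchdb
pgbench -c 16 -j 4 -T 300 benchdb

# replay a file of representative application queries against a copy of the real database
pgbench -n -c 16 -j 4 -T 300 -f app_queries.sql appdb_copy

The -n flag skips the vacuum of the standard pgbench tables, which would otherwise fail when running a custom script against a non-pgbench schema. Craig's auto_explain suggestion pairs well with this: loading the module via shared_preload_libraries and setting auto_explain.log_min_duration to a few hundred milliseconds logs the plans of whatever becomes slow as the data set is scaled up.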
{
"msg_contents": "Hammerora is a good start but does have some issues when trying to get it\nstarted. You can also try PGBench. As someone said, there is a plethora of\nchoices. It all depends on what you want to measure or accomplish.\n\nJohn Jones\n\nOn Sat, Jul 14, 2012 at 1:48 AM, Craig Ringer <[email protected]> wrote:\n\n> On 07/14/2012 09:26 AM, B Sreejith wrote:\n>\n> Dear Robert,\n>\n> We need to scale up both size and load.\n> Could you please provide steps I need to follow.\n>\n>\n> For load, first you need to build a representative sample of your\n> application's querying patterns by logging queries and analysing the logs.\n> Produce a load generator based on that data, set up a test copy of your\n> database, and start pushing the query rate up to see what happens.\n>\n> For simpler loads you can write a transaction script for pgbench based on\n> your queries.\n>\n> For size: Copy your data set, then start duplicating it with munged\n> copies. Repeat, then use the load generator you wrote for the first part to\n> see how scaling the data up affects your queries. See if anything is\n> unacceptably slow (the \"auto_explain\" module is useful here) and examine it.\n>\n> The truth is that predicting how complex database driven apps will scale\n> is insanely hard, because access patterns change as data sizes and user\n> counts grow. You're likely to land up tuning for a scenario that's quite\n> different to the one that you actually face when you start hitting scaling\n> limitations. This doesn't mean you should not investigate, it just means\n> your trials don't prove anything and the optimisations you make based on\n> what you learn may not gain you much.\n>\n> --\n> Craig Ringer\n>\n\nHammerora is a good start but does have some issues when trying to get it started. You can also try PGBench. As someone said, there is a plethora of choices. It all depends on what you want to measure or accomplish.\nJohn JonesOn Sat, Jul 14, 2012 at 1:48 AM, Craig Ringer <[email protected]> wrote:\n\nOn 07/14/2012 09:26 AM, B Sreejith\n wrote:\n\n\nDear Robert,\nWe need to scale up both size and load.\n Could you please provide steps I need to follow.\n\n\n\n For load, first you need to build a representative sample of your\n application's querying patterns by logging queries and analysing the\n logs. Produce a load generator based on that data, set up a test\n copy of your database, and start pushing the query rate up to see\n what happens.\n\n For simpler loads you can write a transaction script for pgbench\n based on your queries.\n\n For size: Copy your data set, then start duplicating it with munged\n copies. Repeat, then use the load generator you wrote for the first\n part to see how scaling the data up affects your queries. See if\n anything is unacceptably slow (the \"auto_explain\" module is useful\n here) and examine it.\n\n The truth is that predicting how complex database driven apps will\n scale is insanely hard, because access patterns change as data sizes\n and user counts grow. You're likely to land up tuning for a scenario\n that's quite different to the one that you actually face when you\n start hitting scaling limitations. This doesn't mean you should not\n investigate, it just means your trials don't prove anything and the\n optimisations you make based on what you learn may not gain you\n much.\n\n --\n Craig Ringer",
"msg_date": "Sat, 14 Jul 2012 02:10:59 -0700",
"msg_from": "John Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any tool/script available which can be used to measure\n\tscalability of an application's database."
},
{
"msg_contents": "Dear All,\nThanks alot for all the invaluable comments.\nRegards,\n Sreejith.\nOn Jul 14, 2012 2:19 PM, \"Craig Ringer\" <[email protected]> wrote:\n\n> On 07/14/2012 09:26 AM, B Sreejith wrote:\n>\n> Dear Robert,\n>\n> We need to scale up both size and load.\n> Could you please provide steps I need to follow.\n>\n>\n> For load, first you need to build a representative sample of your\n> application's querying patterns by logging queries and analysing the logs.\n> Produce a load generator based on that data, set up a test copy of your\n> database, and start pushing the query rate up to see what happens.\n>\n> For simpler loads you can write a transaction script for pgbench based on\n> your queries.\n>\n> For size: Copy your data set, then start duplicating it with munged\n> copies. Repeat, then use the load generator you wrote for the first part to\n> see how scaling the data up affects your queries. See if anything is\n> unacceptably slow (the \"auto_explain\" module is useful here) and examine it.\n>\n> The truth is that predicting how complex database driven apps will scale\n> is insanely hard, because access patterns change as data sizes and user\n> counts grow. You're likely to land up tuning for a scenario that's quite\n> different to the one that you actually face when you start hitting scaling\n> limitations. This doesn't mean you should not investigate, it just means\n> your trials don't prove anything and the optimisations you make based on\n> what you learn may not gain you much.\n>\n> --\n> Craig Ringer\n>\n\nDear All,\nThanks alot for all the invaluable comments.\n Regards,\n Sreejith.\nOn Jul 14, 2012 2:19 PM, \"Craig Ringer\" <[email protected]> wrote:\n\nOn 07/14/2012 09:26 AM, B Sreejith\n wrote:\n\n\nDear Robert,\nWe need to scale up both size and load.\n Could you please provide steps I need to follow.\n\n\n\n For load, first you need to build a representative sample of your\n application's querying patterns by logging queries and analysing the\n logs. Produce a load generator based on that data, set up a test\n copy of your database, and start pushing the query rate up to see\n what happens.\n\n For simpler loads you can write a transaction script for pgbench\n based on your queries.\n\n For size: Copy your data set, then start duplicating it with munged\n copies. Repeat, then use the load generator you wrote for the first\n part to see how scaling the data up affects your queries. See if\n anything is unacceptably slow (the \"auto_explain\" module is useful\n here) and examine it.\n\n The truth is that predicting how complex database driven apps will\n scale is insanely hard, because access patterns change as data sizes\n and user counts grow. You're likely to land up tuning for a scenario\n that's quite different to the one that you actually face when you\n start hitting scaling limitations. This doesn't mean you should not\n investigate, it just means your trials don't prove anything and the\n optimisations you make based on what you learn may not gain you\n much.\n\n --\n Craig Ringer",
"msg_date": "Sat, 14 Jul 2012 15:20:01 +0530",
"msg_from": "B Sreejith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any tool/script available which can be used to measure\n\tscalability of an application's database."
},
{
"msg_contents": "On Sat, Jul 14, 2012 at 11:50 AM, B Sreejith <[email protected]> wrote:\n> Dear All,\n> Thanks alot for all the invaluable comments.\n\nAdditionally to Craig's excellent advice to measurements there's\nsomething else you can do: with the knowledge of the queries your\napplication fires against the database you can evaluate your schema\nand index definitions. While there is no guarantee that your\napplication will scale well if all indexes are present you believe\nneed to be present based on that inspection, you can pretty easily\nidentify tables with can be improved. These are tables which a) are\nknown to grow large and b) do not have indexes nor no indexes which\nsupport the queries your application does against these tables which\nwill result in full table scans. Any database which scales in size\nwill sooner or later hit a point where full table scans of these large\ntables will be extremely slow. If these queries are done during\nregular operation (and not nightly maintenance windows for example)\nthen you pretty surely have identified a show stopper.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Sat, 14 Jul 2012 14:17:48 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any tool/script available which can be used to measure\n\tscalability of an application's database."
},
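As a concrete starting point for the kind of inspection Robert describes, the statistics collector already tracks how often each table is read by sequential scan. A sketch using the standard pg_stat_user_tables view:

SELECT relname,
       seq_scan,
       seq_tup_read,
       idx_scan,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC
LIMIT 20;

Large tables near the top with high seq_scan counts and low idx_scan counts are the candidates Robert is talking about: queries against them are walking the whole table and will degrade as the data grows.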
{
"msg_contents": "> We have around 15 to 18 separate products.What we are told to do is to check\n> the scalability of the underlying DB of each product (application).\n>\n>> Sounds like your client / boss has a case of buzz-word-itis. \"Scalability\"\n>> means lots of different things:\n\nYes, it is still not clear what exactly they want from you, but that\nis what I would do...\n\nI would take the metrics Craig described. These ones:\n\n> - How well it copes with growth of data sizes\n> - How well it copes with growth of query rates / activity\n> - How well it copes with larger user counts (may not be the same as prior)\n- Also hard drives activity, CPU, etc\n\nAnd started to collect this statistics using monitoring tools like\nhttp://www.cacti.net/, for example.\n\nAfter a week/month/quarter, as time passes and the database activity\nand size changes, you will see how the measurements are changed\n(usually degraded). So you would be able to make conclusions on\nwhether your environment meets current requirements or not and to\nforecast critical points.\n\nAs Craig mentioned, you may also try to simulate your database\nactivity either with pgbench. I would just like to show you this\narticle http://www.westnet.com/~gsmith/content/postgresql/pgbench-scaling.htm\nwhere you will find some hints for your case.\n\nAlso look at the playback tools\nhttp://wiki.postgresql.org/wiki/Statement_Playback.\n\n-- \nSergey Konoplev\n\na database architect, software developer at PostgreSQL-Consulting.com\nhttp://www.postgresql-consulting.com\n\nJabber: [email protected] Skype: gray-hemp Phone: +79160686204\n",
"msg_date": "Sat, 14 Jul 2012 16:49:43 +0400",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any tool/script available which can be used to measure\n\tscalability of an application's database."
},
{
"msg_contents": "On 07/14/2012 08:17 PM, Robert Klemme wrote:\n> On Sat, Jul 14, 2012 at 11:50 AM, B Sreejith <[email protected]> wrote:\n>> Dear All,\n>> Thanks alot for all the invaluable comments.\n> Additionally to Craig's excellent advice to measurements there's\n> something else you can do: with the knowledge of the queries your\n> application fires against the database you can evaluate your schema\n> and index definitions. While there is no guarantee that your\n> application will scale well if all indexes are present\nDon't forget that sometimes it's better to DROP an index that isn't used \nmuch, or that only helps occasional queries that aren't time-sensitive. \nEvery index has a cost to maintain - it slows down your inserts and \nupdates and it competes for disk cache with things that might be more \nbeneficial.\n> b) do not have indexes nor no indexes which\n> support the queries your application does against these tables which\n> will result in full table scans.\nA full table scan is not inherently a bad thing, even for a huge table. \nSometimes you just need to examine every row, and the fastest way to do \nthat is without a doubt a full table scan.\n\nRemember, a full table scan won't tend to push everything out of \nshared_buffers, so it can also avoid competition for cache.\n\n(If anyone ever wants concurrent scans badly enough to implement them, \nfull table scans with effective_io_concurrency > 1 will become a *lot* \nfaster for some types of query).\n\n--\nCraig Ringer\n",
"msg_date": "Sat, 14 Jul 2012 23:10:52 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Any tool/script available which can be used to measure\n\tscalability of an application's database."
}
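On Craig's point about dropping indexes that are not earning their keep, the per-index statistics make the candidates easy to spot. A sketch, with the usual caveats that idx_scan counts are only meaningful if statistics have been accumulating over a representative period, and that indexes backing primary key or unique constraints should stay even if they are never scanned:

SELECT schemaname, relname, indexrelname, idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC, pg_relation_size(indexrelid) DESC;

Indexes at the top of that list (rarely or never scanned, yet large) are paying the write and cache cost Craig describes without buying any reads.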
] |
[
{
"msg_contents": "Hi,\n\nI have searched solution to my problem a few days. On my query, there is big performance problem.\nIt seems to me, that problem is on where-part of sql and it's function.\n\nMy sql is:\nselect count(*)\n\t\tfrom table_h \n\t\twhere \n\t\t\tlevel <= get_level_value(11268,id,area) and \n\t\t\t(date1 >= '2011-1-1' or date2>='2011-1-1') and \n\t\t\t(date1 <= '2012-07-09' or date2<='2012-07-09')\nThis takes about 40sek.\n\nselect count(*)\n\t\tfrom table_h \n\t\twhere \n\t\t\t(date1 >= '2011-1-1' or date2>='2011-1-1') and \n\t\t\t(date1 <= '2012-07-09' or date2<='2012-07-09')\nwhen ignoring function, it takes <1sek.\n\nFunction is:\nCREATE OR REPLACE FUNCTION get_level_value(_user integer, _id, _area) RETURNS integer\n AS $$\nDECLARE found integer;\nBEGIN\n SELECT 1 INTO found\n FROM table_o\n WHERE userid=_user AND\n id=_id AND\n area=_area;\n IF (found) THEN\n return 3;\n ELSE\n return 1;\n END IF;\nEND;\n$$\nLANGUAGE plpgsql;\n\nOn explain, it seems to me that this function is on filter and it will execute on every row. Total resultset contains 1 700 000 rows.\nQUERY PLAN\nAggregate (cost=285543.89..285543.90 rows=1 width=0) (actual time=32391.380..32391.380 rows=1 loops=1)\n -> Bitmap Heap Scan on table_h (cost=11017.63..284987.40 rows=222596 width=0) (actual time=326.946..31857.145 rows=631818 loops=1)\n Recheck Cond: ((date1 >= '2011-01-01'::date) OR (date2 >= '2011-01-01'::date))\n Filter: (((date1 <= '2012-07-09'::date) OR (date2 <= '2012-07-09'::date)) AND (level <= get_level_value(11268, id, area)))\n -> BitmapOr (cost=11017.63..11017.63 rows=669412 width=0) (actual time=321.635..321.635 rows=0 loops=1)\n -> Bitmap Index Scan on date1 (cost=0.00..10626.30 rows=652457 width=0) (actual time=84.555..84.555 rows=647870 loops=1)\n Index Cond: (date1 >= '2011-01-01'::date)\n -> Bitmap Index Scan on date2_table_h (cost=0.00..280.03 rows=16955 width=0) (actual time=237.074..237.074 rows=15222 loops=1)\n Index Cond: (date2 >= '2011-01-01'::date)\n\nHow should I handle this situation and use function?\n\n--\nkupen\n\n-- \nWippies-vallankumous on t��ll�! Varmista paikkasi vallankumouksen eturintamassa ja liity Wippiesiin heti!\nhttp://www.wippies.com/\n\n",
"msg_date": "Tue, 10 Jul 2012 11:36:15 +0300 (EEST)",
"msg_from": "Pena Kupen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Custom function in where clause"
},
{
"msg_contents": "On Tue, Jul 10, 2012 at 6:36 PM, Pena Kupen <[email protected]> wrote:\n\n> Hi,\n>\n> I have searched solution to my problem a few days. On my query, there is\n> big performance problem.\n> It seems to me, that problem is on where-part of sql and it's function.\n>\n> My sql is:\n> select count(*)\n> from table_h where level <=\n> get_level_value(11268,id,area) and (date1 >= '2011-1-1'\n> or date2>='2011-1-1') and (date1 <= '2012-07-09' or\n> date2<='2012-07-09')\n> This takes about 40sek.\n>\n> select count(*)\n> from table_h where (date1 >=\n> '2011-1-1' or date2>='2011-1-1') and (date1 <=\n> '2012-07-09' or date2<='2012-07-09')\n> when ignoring function, it takes <1sek.\n>\n> Function is:\n> CREATE OR REPLACE FUNCTION get_level_value(_user integer, _id, _area)\n> RETURNS integer\n> AS $$\n> DECLARE found integer;\n> BEGIN\n> SELECT 1 INTO found\n> FROM table_o\n> WHERE userid=_user AND\n> id=_id AND\n> area=_area;\n> IF (found) THEN\n> return 3;\n> ELSE\n> return 1;\n> END IF;\n> END;\n> $$\n> LANGUAGE plpgsql;\n>\n> On explain, it seems to me that this function is on filter and it will\n> execute on every row. Total resultset contains 1 700 000 rows.\n> QUERY PLAN\n> Aggregate (cost=285543.89..285543.90 rows=1 width=0) (actual\n> time=32391.380..32391.380 rows=1 loops=1)\n> -> Bitmap Heap Scan on table_h (cost=11017.63..284987.40 rows=222596\n> width=0) (actual time=326.946..31857.145 rows=631818 loops=1)\n> Recheck Cond: ((date1 >= '2011-01-01'::date) OR (date2 >=\n> '2011-01-01'::date))\n> Filter: (((date1 <= '2012-07-09'::date) OR (date2 <=\n> '2012-07-09'::date)) AND (level <= get_level_value(11268, id, area)))\n> -> BitmapOr (cost=11017.63..11017.63 rows=669412 width=0) (actual\n> time=321.635..321.635 rows=0 loops=1)\n> -> Bitmap Index Scan on date1 (cost=0.00..10626.30\n> rows=652457 width=0) (actual time=84.555..84.555 rows=647870 loops=1)\n> Index Cond: (date1 >= '2011-01-01'::date)\n> -> Bitmap Index Scan on date2_table_h (cost=0.00..280.03\n> rows=16955 width=0) (actual time=237.074..237.074 rows=15222 loops=1)\n> Index Cond: (date2 >= '2011-01-01'::date)\n>\n> How should I handle this situation and use function?\n>\n>\nYou could not have good performance using function in case where direct\nJOIN is only way to have reasonable performance.\nStop using function and write join with table_o instead, or put whole query\nwith join inside a function.\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttp://www.postgresql-consulting.ru/ <http://www.postgresql-consulting.com/>\n\nPhone RU: +7 910 405 4718\nPhone AU: +61 45 218 5678\n\nSkype: maxim.boguk\nJabber: [email protected]\nМойКруг: http://mboguk.moikrug.ru/\n\n\"People problems are solved with people.\nIf people cannot solve the problem, try technology.\nPeople will then wish they'd listened at the first stage.\"\n\nOn Tue, Jul 10, 2012 at 6:36 PM, Pena Kupen <[email protected]> wrote:\n\nHi,\n\nI have searched solution to my problem a few days. 
On my query, there is big performance problem.\nIt seems to me, that problem is on where-part of sql and it's function.\n\nMy sql is:\nselect count(*)\n from table_h where level <= get_level_value(11268,id,area) and (date1 >= '2011-1-1' or date2>='2011-1-1') and (date1 <= '2012-07-09' or date2<='2012-07-09')\n\n\nThis takes about 40sek.\n\nselect count(*)\n from table_h where (date1 >= '2011-1-1' or date2>='2011-1-1') and (date1 <= '2012-07-09' or date2<='2012-07-09')\n\n\nwhen ignoring function, it takes <1sek.\n\nFunction is:\nCREATE OR REPLACE FUNCTION get_level_value(_user integer, _id, _area) RETURNS integer\n AS $$\nDECLARE found integer;\nBEGIN\n SELECT 1 INTO found\n FROM table_o\n WHERE userid=_user AND\n id=_id AND\n area=_area;\n IF (found) THEN\n return 3;\n ELSE\n return 1;\n END IF;\nEND;\n$$\nLANGUAGE plpgsql;\n\nOn explain, it seems to me that this function is on filter and it will execute on every row. Total resultset contains 1 700 000 rows.\nQUERY PLAN\nAggregate (cost=285543.89..285543.90 rows=1 width=0) (actual time=32391.380..32391.380 rows=1 loops=1)\n -> Bitmap Heap Scan on table_h (cost=11017.63..284987.40 rows=222596 width=0) (actual time=326.946..31857.145 rows=631818 loops=1)\n Recheck Cond: ((date1 >= '2011-01-01'::date) OR (date2 >= '2011-01-01'::date))\n Filter: (((date1 <= '2012-07-09'::date) OR (date2 <= '2012-07-09'::date)) AND (level <= get_level_value(11268, id, area)))\n -> BitmapOr (cost=11017.63..11017.63 rows=669412 width=0) (actual time=321.635..321.635 rows=0 loops=1)\n -> Bitmap Index Scan on date1 (cost=0.00..10626.30 rows=652457 width=0) (actual time=84.555..84.555 rows=647870 loops=1)\n Index Cond: (date1 >= '2011-01-01'::date)\n -> Bitmap Index Scan on date2_table_h (cost=0.00..280.03 rows=16955 width=0) (actual time=237.074..237.074 rows=15222 loops=1)\n Index Cond: (date2 >= '2011-01-01'::date)\n\nHow should I handle this situation and use function?You could not have good performance using function in case where direct JOIN is only way to have reasonable performance.Stop using function and write join with table_o instead, or put whole query with join inside a function.\n-- Maxim BogukSenior Postgresql DBAhttp://www.postgresql-consulting.ru/Phone RU: +7 910 405 4718Phone AU: +61 45 218 5678\nSkype: maxim.bogukJabber: [email protected]МойКруг: http://mboguk.moikrug.ru/\"People problems are solved with people. \n\nIf people cannot solve the problem, try technology. People will then wish they'd listened at the first stage.\"",
"msg_date": "Tue, 10 Jul 2012 18:44:18 +1000",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Custom function in where clause"
},
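A minimal sketch of the join-based rewrite Maxim suggests above, using the table and column names quoted in this thread. The join condition, and the assumption that table_o holds at most one row per (userid, id, area), are guesses taken from the original function; if duplicates are possible, an EXISTS test is the safer form.

    -- Hypothetical rewrite: replace the per-row get_level_value() call with a
    -- LEFT JOIN against table_o, so the lookup becomes part of the plan
    -- instead of a function call executed for every candidate row.
    SELECT count(*)
    FROM table_h h
    LEFT JOIN table_o o
           ON o.userid = 11268
          AND o.id     = h.id
          AND o.area   = h.area
    WHERE h.level <= CASE WHEN o.userid IS NOT NULL THEN 3 ELSE 1 END
      AND (h.date1 >= '2011-01-01' OR h.date2 >= '2011-01-01')
      AND (h.date1 <= '2012-07-09' OR h.date2 <= '2012-07-09');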
{
"msg_contents": "On 10 July 2012 18:36, Pena Kupen <[email protected]> wrote:\n> Hi,\n>\n> I have searched solution to my problem a few days. On my query, there is big\n> performance problem.\n> It seems to me, that problem is on where-part of sql and it's function.\n>\n\n> How should I handle this situation and use function?\n>\n\nI would start by rewriting your function in plain SQL rather than\nPL/pgSQL. As a general rule, don't write a function in PL/pgSQL\nunless you really need procedurality. This function does not.\n\nFor example:\n\nCREATE OR REPLACE FUNCTION get_level_value(_user integer, _id, _area)\nRETURNS integer\n AS $$\n -- Return 3 if there are matching records in table_o, otherwise return 1.\n SELECT CASE WHEN EXISTS (\n SELECT id\n FROM table_o\n WHERE userid=_user AND\n id=_id AND\n area=_area\n ) THEN 3 ELSE 1 END;\n$$\nLANGUAGE sql STABLE;\n\nCheers,\nBJ\n",
"msg_date": "Tue, 10 Jul 2012 18:45:29 +1000",
"msg_from": "Brendan Jurd <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Custom function in where clause"
}
] |
[
{
"msg_contents": "Hi and thank's guys!\n\nFirst trying this Brendan's recommendation.\n\nIt seems only a small difference between sql and PL/pgSQL. from 40-->37. Not so good yet.\nI will try Maxim's little later and you all know.\n\n--\nkupen\n\nBrendan Jurd [[email protected]] kirjoitti: \n> On 10 July 2012 18:36, Pena Kupen <[email protected]> wrote:\n> > Hi,\n> >\n> > I have searched solution to my problem a few days. On my query, there is big\n> > performance problem.\n> > It seems to me, that problem is on where-part of sql and it's function.\n> >\n> \n> > How should I handle this situation and use function?\n> >\n> \n> I would start by rewriting your function in plain SQL rather than\n> PL/pgSQL. As a general rule, don't write a function in PL/pgSQL\n> unless you really need procedurality. This function does not.\n> \n> For example:\n> \n> CREATE OR REPLACE FUNCTION get_level_value(_user integer, _id, _area)\n> RETURNS integer\n> AS $$\n> -- Return 3 if there are matching records in table_o, otherwise return 1.\n> SELECT CASE WHEN EXISTS (\n> SELECT id\n> FROM table_o\n> WHERE userid=_user AND\n> id=_id AND\n> area=_area\n> ) THEN 3 ELSE 1 END;\n> $$\n> LANGUAGE sql STABLE;\n> \n> Cheers,\n> BJ\n> \n\n\n-- \nWippies-vallankumous on t��ll�! Varmista paikkasi vallankumouksen eturintamassa ja liity Wippiesiin heti!\nhttp://www.wippies.com/\n\n",
"msg_date": "Tue, 10 Jul 2012 12:30:38 +0300 (EEST)",
"msg_from": "Pena Kupen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Custom function in where clause"
}
] |
[
{
"msg_contents": "Hello again,\n\nSeems to be ok, by adding normal outer join and some fields on where-part.\n\nPrevious, I use to used with Oracle and Sybase databases as much as possible functions/procedures.\nThere ware something to do with performance: \"Do it on server, not in client\".\nTypically all programs were c/s, maybe that or am I missing something?\n\n--\nkupen\n\nMaxim Boguk [[email protected]] kirjoitti: \n> On Tue, Jul 10, 2012 at 6:36 PM, Pena Kupen <[email protected]> wrote:\n> \n> > Hi,\n> >\n> > I have searched solution to my problem a few days. On my query, there is\n> > big performance problem.\n> > It seems to me, that problem is on where-part of sql and it's function.\n> >\n> > My sql is:\n> > select count(*)\n> > from table_h where level <=\n> > get_level_value(11268,id,area) and (date1 >= '2011-1-1'\n> > or date2>='2011-1-1') and (date1 <= '2012-07-09' or\n> > date2<='2012-07-09')\n> > This takes about 40sek.\n> >\n> > select count(*)\n> > from table_h where (date1 >=\n> > '2011-1-1' or date2>='2011-1-1') and (date1 <=\n> > '2012-07-09' or date2<='2012-07-09')\n> > when ignoring function, it takes <1sek.\n> >\n> > Function is:\n> > CREATE OR REPLACE FUNCTION get_level_value(_user integer, _id, _area)\n> > RETURNS integer\n> > AS $$\n> > DECLARE found integer;\n> > BEGIN\n> > SELECT 1 INTO found\n> > FROM table_o\n> > WHERE userid=_user AND\n> > id=_id AND\n> > area=_area;\n> > IF (found) THEN\n> > return 3;\n> > ELSE\n> > return 1;\n> > END IF;\n> > END;\n> > $$\n> > LANGUAGE plpgsql;\n> >\n> > On explain, it seems to me that this function is on filter and it will\n> > execute on every row. Total resultset contains 1 700 000 rows.\n> > QUERY PLAN\n> > Aggregate (cost=285543.89..285543.90 rows=1 width=0) (actual\n> > time=32391.380..32391.380 rows=1 loops=1)\n> > -> Bitmap Heap Scan on table_h (cost=11017.63..284987.40 rows=222596\n> > width=0) (actual time=326.946..31857.145 rows=631818 loops=1)\n> > Recheck Cond: ((date1 >= '2011-01-01'::date) OR (date2 >=\n> > '2011-01-01'::date))\n> > Filter: (((date1 <= '2012-07-09'::date) OR (date2 <=\n> > '2012-07-09'::date)) AND (level <= get_level_value(11268, id, area)))\n> > -> BitmapOr (cost=11017.63..11017.63 rows=669412 width=0) (actual\n> > time=321.635..321.635 rows=0 loops=1)\n> > -> Bitmap Index Scan on date1 (cost=0.00..10626.30\n> > rows=652457 width=0) (actual time=84.555..84.555 rows=647870 loops=1)\n> > Index Cond: (date1 >= '2011-01-01'::date)\n> > -> Bitmap Index Scan on date2_table_h (cost=0.00..280.03\n> > rows=16955 width=0) (actual time=237.074..237.074 rows=15222 loops=1)\n> > Index Cond: (date2 >= '2011-01-01'::date)\n> >\n> > How should I handle this situation and use function?\n> >\n> >\n> You could not have good performance using function in case where direct\n> JOIN is only way to have reasonable performance.\n> Stop using function and write join with table_o instead, or put whole query\n> with join inside a function.\n> \n> -- \n> Maxim Boguk\n> Senior Postgresql DBA\n> http://www.postgresql-consulting.ru/ <http://www.postgresql-consulting.com/>\n> \n> Phone RU: +7 910 405 4718\n> Phone AU: +61 45 218 5678\n> \n> Skype: maxim.boguk\n> Jabber: [email protected]\n> \u001c>9\u001a@C3: http://mboguk.moikrug.ru/\n> \n> \"People problems are solved with people.\n> If people cannot solve the problem, try technology.\n> People will then wish they'd listened at the first stage.\"\n> \n\n\n\n\n-- \nWippies-vallankumous on t��ll�! 
Varmista paikkasi vallankumouksen eturintamassa ja liity Wippiesiin heti!\nhttp://www.wippies.com/\n\n",
"msg_date": "Tue, 10 Jul 2012 13:20:06 +0300 (EEST)",
"msg_from": "Pena Kupen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fw: Re: Custom function in where clause"
}
] |
[
{
"msg_contents": "Hi\n\nAfter seeing a few discussions here and on Stack Overflow I've put \ntogether a quick explanation of why \"DELETE FROM table;\" may be faster \nthan \"TRUNCATE table\" for people doing unit testing on lots of tiny \ntables, people who're doing this so often they care how long it takes.\n\nI'd love it if a few folks who know the guts were to take a look and \nverify its correctness:\n\nhttp://stackoverflow.com/a/11423886/398670\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 11 Jul 2012 08:37:24 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On Tue, Jul 10, 2012 at 5:37 PM, Craig Ringer <[email protected]> wrote:\n> Hi\n>\n> After seeing a few discussions here and on Stack Overflow I've put together\n> a quick explanation of why \"DELETE FROM table;\" may be faster than \"TRUNCATE\n> table\" for people doing unit testing on lots of tiny tables, people who're\n> doing this so often they care how long it takes.\n>\n> I'd love it if a few folks who know the guts were to take a look and verify\n> its correctness:\n\nI haven't said this before, but think it every time someone asks me\nabout this, so I'll say it now:\n\nThis is a papercut that should be solved with improved mechanics.\nTRUNCATE should simply be very nearly the fastest way to remove data\nfrom a table while retaining its type information, and if that means\ndoing DELETE without triggers when the table is small, then it should.\n The only person who could thwart me is someone who badly wants their\n128K table to be exactly 8 or 0K, which seems unlikely given the 5MB\nof catalog anyway.\n\nDoes that sound reasonable? As in, would anyone object if TRUNCATE\nlearned this behavior?\n\n-- \nfdr\n",
"msg_date": "Tue, 10 Jul 2012 22:22:27 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Daniel Farina <[email protected]> writes:\n> TRUNCATE should simply be very nearly the fastest way to remove data\n> from a table while retaining its type information, and if that means\n> doing DELETE without triggers when the table is small, then it should.\n> The only person who could thwart me is someone who badly wants their\n> 128K table to be exactly 8 or 0K, which seems unlikely given the 5MB\n> of catalog anyway.\n\n> Does that sound reasonable? As in, would anyone object if TRUNCATE\n> learned this behavior?\n\nYes, I will push back on that.\n\n(1) We don't need the extra complexity.\n\n(2) I don't believe that you know where the performance crossover point\nwould be (according to what metric, anyway?).\n\n(3) The performance of the truncation itself should not be viewed in\nisolation; subsequent behavior also needs to be considered. An example\nof possible degradation is that index bloat would no longer be\nguaranteed to be cleaned up over a series of repeated truncations.\n(You might argue that if the table is small then the indexes couldn't\nbe very bloated, but I don't think that holds up over a long series.)\n\nIOW, I think it's fine as-is. I'd certainly wish to see many more\nthan one complainant before we expend effort in this area.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Jul 2012 10:05:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On Wed, Jul 11, 2012 at 10:05:48AM -0400, Tom Lane wrote:\n> Daniel Farina <[email protected]> writes:\n> > TRUNCATE should simply be very nearly the fastest way to remove data\n> > from a table while retaining its type information, and if that means\n> > doing DELETE without triggers when the table is small, then it should.\n> > The only person who could thwart me is someone who badly wants their\n> > 128K table to be exactly 8 or 0K, which seems unlikely given the 5MB\n> > of catalog anyway.\n> \n> > Does that sound reasonable? As in, would anyone object if TRUNCATE\n> > learned this behavior?\n> \n> Yes, I will push back on that.\n> \n> (1) We don't need the extra complexity.\n> \n> (2) I don't believe that you know where the performance crossover point\n> would be (according to what metric, anyway?).\n> \n> (3) The performance of the truncation itself should not be viewed in\n> isolation; subsequent behavior also needs to be considered. An example\n> of possible degradation is that index bloat would no longer be\n> guaranteed to be cleaned up over a series of repeated truncations.\n> (You might argue that if the table is small then the indexes couldn't\n> be very bloated, but I don't think that holds up over a long series.)\n> \n> IOW, I think it's fine as-is. I'd certainly wish to see many more\n> than one complainant before we expend effort in this area.\n> \n> \t\t\tregards, tom lane\n> \n\n+1 TRUNCATE needs to keep the same properties independent of the size\nof the table. Smearing it into a DELETE would not be good at all. If\nthere are optimizations that can be done to keep its current behavior,\nthose might be possible, but the complexity may not be worthwhile for\na relative corner case.\n\nRegards,\nKen\n",
"msg_date": "Wed, 11 Jul 2012 09:19:54 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Tom Lane wrote:\n> (3) The performance of the truncation itself should not be viewed in\n> isolation; subsequent behavior also needs to be considered. An example\n> of possible degradation is that index bloat would no longer be\n> guaranteed to be cleaned up over a series of repeated truncations.\n> (You might argue that if the table is small then the indexes couldn't\n> be very bloated, but I don't think that holds up over a long series.)\n>\n> IOW, I think it's fine as-is. I'd certainly wish to see many more\n> than one complainant before we expend effort in this area.\n\nI think a documentation change would be worthwhile.\n\nAt the moment the TRUNCATE page says, with no caveats, that it is faster than\nunqualified DELETE.\n\nIt surprised me to find that this wasn't true (with 7.2, again with small\ntables in a testsuite), and evidently it's still surprising people today.\n\n-M-\n",
"msg_date": "Wed, 11 Jul 2012 19:10:37 +0100",
"msg_from": "Matthew Woodcraft <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On Wed, Jul 11, 2012 at 7:05 AM, Tom Lane <[email protected]> wrote:\n\n> Daniel Farina <[email protected]> writes:\n> > TRUNCATE should simply be very nearly the fastest way to remove data\n> > from a table while retaining its type information, and if that means\n> > doing DELETE without triggers when the table is small, then it should.\n> > The only person who could thwart me is someone who badly wants their\n> > 128K table to be exactly 8 or 0K, which seems unlikely given the 5MB\n> > of catalog anyway.\n>\n> > Does that sound reasonable? As in, would anyone object if TRUNCATE\n> > learned this behavior?\n>\n> Yes, I will push back on that.\n>\n> (1) We don't need the extra complexity.\n>\n> (2) I don't believe that you know where the performance crossover point\n> would be (according to what metric, anyway?).\n>\n> (3) The performance of the truncation itself should not be viewed in\n> isolation; subsequent behavior also needs to be considered. An example\n> of possible degradation is that index bloat would no longer be\n> guaranteed to be cleaned up over a series of repeated truncations.\n> (You might argue that if the table is small then the indexes couldn't\n> be very bloated, but I don't think that holds up over a long series.)\n>\n> IOW, I think it's fine as-is. I'd certainly wish to see many more\n> than one complainant before we expend effort in this area.\n>\n\nIt strikes me as a contrived case rather than a use case. What sort of app\nrepeatedly fills and truncates a small table thousands of times ... other\nthan a test app to see whether you can do it or not?\n\nThe main point of truncate is to provide a more efficient mechanism to\ndelete all data from large tables. If your app developers don't know within\na couple orders of magnitude how much data your tables hold, and can't\nfigure out whether to use delete or truncate, I can't find much sympathy in\nmy heart.\n\nCraig\n\nOn Wed, Jul 11, 2012 at 7:05 AM, Tom Lane <[email protected]> wrote:\nDaniel Farina <[email protected]> writes:\n> TRUNCATE should simply be very nearly the fastest way to remove data\n> from a table while retaining its type information, and if that means\n> doing DELETE without triggers when the table is small, then it should.\n> The only person who could thwart me is someone who badly wants their\n> 128K table to be exactly 8 or 0K, which seems unlikely given the 5MB\n> of catalog anyway.\n\n> Does that sound reasonable? As in, would anyone object if TRUNCATE\n> learned this behavior?\n\nYes, I will push back on that.\n\n(1) We don't need the extra complexity.\n\n(2) I don't believe that you know where the performance crossover point\nwould be (according to what metric, anyway?).\n\n(3) The performance of the truncation itself should not be viewed in\nisolation; subsequent behavior also needs to be considered. An example\nof possible degradation is that index bloat would no longer be\nguaranteed to be cleaned up over a series of repeated truncations.\n(You might argue that if the table is small then the indexes couldn't\nbe very bloated, but I don't think that holds up over a long series.)\n\nIOW, I think it's fine as-is. I'd certainly wish to see many more\nthan one complainant before we expend effort in this area. It strikes me as a contrived case rather than a use case. What sort of app repeatedly fills and truncates a small table thousands of times ... 
other than a test app to see whether you can do it or not?\nThe main point of truncate is to provide a more efficient mechanism to delete all data from large tables. If your app developers don't know within a couple orders of magnitude how much data your tables hold, and can't figure out whether to use delete or truncate, I can't find much sympathy in my heart.\nCraig",
"msg_date": "Wed, 11 Jul 2012 13:18:32 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On 07/11/2012 03:18 PM, Craig James wrote:\n\n> It strikes me as a contrived case rather than a use case. What sort of\n> app repeatedly fills and truncates a small table thousands of times ...\n> other than a test app to see whether you can do it or not?\n\nTest systems. Any company with even a medium-size QA environment will \nhave continuous integration systems that run unit tests on a trash \ndatabase hundreds or thousands of times through the day. Aside from \ndropping/creating the database via template, which would be *really* \nslow, truncate is the easiest/fastest way to reset between tests.\n\nIf TRUNCATE suddenly started defaulting to DELETE on small table-sets \nand several iterations led to exponential index growth, that would be \nrather unfortunate.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Wed, 11 Jul 2012 15:47:21 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
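As a rough illustration of the reset pattern Shaun describes, with hypothetical table names (users, orders, order_items): a single TRUNCATE can clear several tables at once and also resets their on-disk files and sequences, which is the behaviour the thread is debating, while the DELETE variant relies on later vacuuming.

    -- Typical per-test reset in one statement; CASCADE follows foreign keys
    -- and RESTART IDENTITY resets owned sequences.
    TRUNCATE TABLE users, orders, order_items RESTART IDENTITY CASCADE;

    -- DELETE-based reset, reported in this thread as faster for tiny tables;
    -- children must go before parents, and dead rows are only reclaimed by a
    -- later (auto)vacuum.
    DELETE FROM order_items;
    DELETE FROM orders;
    DELETE FROM users;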
{
"msg_contents": "\nOn 07/11/2012 04:47 PM, Shaun Thomas wrote:\n> On 07/11/2012 03:18 PM, Craig James wrote:\n>\n>> It strikes me as a contrived case rather than a use case. What sort of\n>> app repeatedly fills and truncates a small table thousands of times ...\n>> other than a test app to see whether you can do it or not?\n>\n> Test systems. Any company with even a medium-size QA environment will \n> have continuous integration systems that run unit tests on a trash \n> database hundreds or thousands of times through the day. Aside from \n> dropping/creating the database via template, which would be *really* \n> slow, truncate is the easiest/fastest way to reset between tests.\n\n\nWhy is recreating the test db from a (populated) template going to be \nslower than truncating all the tables and repopulating from an external \nsource? I had a client who achieved a major improvement in speed and \nreduction in load by moving to this method of test db setup.\n\ncheers\n\nandrew\n\n\n",
"msg_date": "Wed, 11 Jul 2012 17:04:39 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On 11/07/12 21:18, Craig James wrote:\n>\n> It strikes me as a contrived case rather than a use case. What sort \n> of app repeatedly fills and truncates a small table thousands of times \n> ... other than a test app to see whether you can do it or not?\nIf I have a lot of data which updates/inserts an existing table but I \ndon't know if a given record will be an update or an insert, then I \nwrite all the 'new' data to a temporary table and then use sql \nstatements to achieve the updates and inserts on the existing table.\n\nIs there a better way of doing this in standard SQL?\n\nMark\n\n\n",
"msg_date": "Wed, 11 Jul 2012 22:32:33 +0100",
"msg_from": "Mark Thornton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On Wed, Jul 11, 2012 at 2:32 PM, Mark Thornton <[email protected]> wrote:\n\n> On 11/07/12 21:18, Craig James wrote:\n>\n>>\n>> It strikes me as a contrived case rather than a use case. What sort of\n>> app repeatedly fills and truncates a small table thousands of times ...\n>> other than a test app to see whether you can do it or not?\n>>\n> If I have a lot of data which updates/inserts an existing table but I\n> don't know if a given record will be an update or an insert, then I write\n> all the 'new' data to a temporary table and then use sql statements to\n> achieve the updates and inserts on the existing table.\n>\n> Is there a better way of doing this in standard SQL?\n>\n\nIf it's a single session, use a temporary table. It is faster to start\nwith (temp tables aren't logged), and it's automatically dropped at the end\nof the session (or at the end of the transaction if that's what you\nspecified when you created it). This doesn't work if your insert/update\nspans more than one session.\n\nAnother trick that works (depending on how big your tables are) is to scan\nthe primary key before you start, and build a hash table of the keys. That\ninstantly tells you whether each record should be an insert or update.\n\nCraig\n\n\n>\n> Mark\n>\n>\n>\n\nOn Wed, Jul 11, 2012 at 2:32 PM, Mark Thornton <[email protected]> wrote:\nOn 11/07/12 21:18, Craig James wrote:\n\n\nIt strikes me as a contrived case rather than a use case. What sort of app repeatedly fills and truncates a small table thousands of times ... other than a test app to see whether you can do it or not?\n\nIf I have a lot of data which updates/inserts an existing table but I don't know if a given record will be an update or an insert, then I write all the 'new' data to a temporary table and then use sql statements to achieve the updates and inserts on the existing table.\n\nIs there a better way of doing this in standard SQL?If it's a single session, use a temporary table. It is faster to start with (temp tables aren't logged), and it's automatically dropped at the end of the session (or at the end of the transaction if that's what you specified when you created it). This doesn't work if your insert/update spans more than one session.\nAnother trick that works (depending on how big your tables are) is to scan the primary key before you start, and build a hash table of the keys. That instantly tells you whether each record should be an insert or update.\nCraig \n\nMark",
"msg_date": "Wed, 11 Jul 2012 15:09:56 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
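A sketch of the staging-table pattern Craig describes, with hypothetical table and column names (target(id, val)) and a placeholder CSV path; wrapping the steps in one transaction keeps the ON COMMIT DROP temp table alive for all of them.

    BEGIN;

    -- Stage the incoming rows in a temp table that disappears on commit.
    CREATE TEMP TABLE staging (LIKE target INCLUDING DEFAULTS) ON COMMIT DROP;
    COPY staging FROM '/tmp/new_data.csv' WITH (FORMAT csv);

    -- Rows whose keys already exist in the target become updates...
    UPDATE target t
       SET val = s.val
      FROM staging s
     WHERE t.id = s.id;

    -- ...and the remaining rows become inserts.
    INSERT INTO target (id, val)
    SELECT s.id, s.val
      FROM staging s
     WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);

    COMMIT;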
{
"msg_contents": "On Wed, Jul 11, 2012 at 7:05 AM, Tom Lane <[email protected]> wrote:\n> Daniel Farina <[email protected]> writes:\n>> TRUNCATE should simply be very nearly the fastest way to remove data\n>> from a table while retaining its type information, and if that means\n>> doing DELETE without triggers when the table is small, then it should.\n>> The only person who could thwart me is someone who badly wants their\n>> 128K table to be exactly 8 or 0K, which seems unlikely given the 5MB\n>> of catalog anyway.\n>\n>> Does that sound reasonable? As in, would anyone object if TRUNCATE\n>> learned this behavior?\n>\n> Yes, I will push back on that.\n>\n> (1) We don't need the extra complexity.\n\nWell, a \"need\" is justified by the gains, no? It seems like this\nfollows from the thoughts presented afterwards, so I'll discuss those.\n\n> (2) I don't believe that you know where the performance crossover point\n> would be (according to what metric, anyway?).\n\nNope. I don't. But an exact crossover is a level of precision I don't\nreally need, because here are where things stand on a completely\nunremarkable test suite on the closest project to me that meets the\n\"regular web-app\" profile case:\n\nWith en-masse DELETE:\nrake 41.89s user 3.08s system 76% cpu 58.629 total\n\nWith TRUNCATE:\nrake 49.86s user 2.93s system 5% cpu 15:17.88 total\n\n15x slower. This is a Macbook Air with full disk encryption and SSD\ndisk with fsync off, e.g. a very typical developer configuration.\nThis is a rather small schema -- probably a half a dozen tables, and\nprobably about a dozen indexes. This application is entirely\nunremarkable in its test-database workload: it wants to load a few\nrecords, do a few things, and then clear those handful of records.\n\n> (3) The performance of the truncation itself should not be viewed in\n> isolation; subsequent behavior also needs to be considered. An example\n> of possible degradation is that index bloat would no longer be\n> guaranteed to be cleaned up over a series of repeated truncations.\n> (You might argue that if the table is small then the indexes couldn't\n> be very bloated, but I don't think that holds up over a long series.)\n\nI'm not entirely convinced to the mechanism, it was simply the most\nobvious one, but I bet a one that is better in every respect is also\npossible. It did occur to me that bloat might be a sticky point.\n\n> IOW, I think it's fine as-is. I'd certainly wish to see many more\n> than one complainant before we expend effort in this area.\n\nI've seen way more than one complaint, and I'm quite sure there are\nthousands of man hours (or more) spent on people who don't even know\nto complain about such atrocious performance (or maybe it's so bad\nthat most people run a web search and find out, probably being left\nreally annoyed from having to yak shave as a result). In spite of how\nfamiliar I am with Postgres and its mailing lists, I have glossed over\nthis for a long time, just thinking \"wow, that really sucks\" and only\nnow -- by serendipity of having skimmed this post -- have seen fit to\ncomplain on behalf of quite a few rounds of dispensing workaround\nadvice to other people. 
It's only when this was brought to the fore\nof my mind did I stop to consider how much wasted time I've seen in\npeople trying to figure this out over and over again (granted, they\ntend to remember after the first time).\n\nPerhaps a doc fix is all we need (TRUNCATE is constant-time on large\ntables, but can be very slow compared to DELETE on small tables), but\nI completely and enthusiastically reject any notion from people\ncalling this \"contrived\" or an \"edge case,\" because people writing\nsoftware against PostgreSQL that have unit tests have this use case\nconstantly, often dozens or even hundreds of times a day.\n\nWhat I don't know is how many people figure out that they should use\nDELETE instead, and after how long. Even though the teams I work with\nare very familiar with many of the finer points of Postgres, doing\nsome probing for the first time took a little while.\n\nIf we're going to live with it, I contest that we should own it as a\nreal and substantial weakness for development productivity, and not\nsweep it under the rug as some \"contrived\" or \"corner\" case.\n\n-- \nfdr\n",
"msg_date": "Wed, 11 Jul 2012 15:51:40 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On 07/12/2012 02:10 AM, Matthew Woodcraft wrote:\n> I think a documentation change would be worthwhile. At the moment the \n> TRUNCATE page says, with no caveats, that it is faster than \n> unqualified DELETE.\n\n+1 to updating the docs to reflect the fact that TRUNCATE may have a \nhigher fixed cost than DELETE FROM table; but also prevents bloat.\n\nIt's a weird little corner case, but with database-backed unit testing \nit's going to become a more significant one whether or not it feels like \nit makes any sense.\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 12 Jul 2012 09:23:16 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On 07/11/2012 01:22 PM, Daniel Farina wrote:\n> On Tue, Jul 10, 2012 at 5:37 PM, Craig Ringer <[email protected]> wrote:\n>> Hi\n>>\n>> After seeing a few discussions here and on Stack Overflow I've put together\n>> a quick explanation of why \"DELETE FROM table;\" may be faster than \"TRUNCATE\n>> table\" for people doing unit testing on lots of tiny tables, people who're\n>> doing this so often they care how long it takes.\n>>\n>> I'd love it if a few folks who know the guts were to take a look and verify\n>> its correctness:\n> I haven't said this before, but think it every time someone asks me\n> about this, so I'll say it now:\n>\n> This is a papercut that should be solved with improved mechanics.\n> TRUNCATE should simply be very nearly the fastest way to remove data\n> from a table while retaining its type information, and if that means\n> doing DELETE without triggers when the table is small, then it should.\n> The only person who could thwart me is someone who badly wants their\n> 128K table to be exactly 8 or 0K, which seems unlikely given the 5MB\n> of catalog anyway.\n>\n> Does that sound reasonable? As in, would anyone object if TRUNCATE\n> learned this behavior?\nYep, I'd object. It's more complicated and less predictable. Also, as I \nstrongly and repeatedly highlighted in my post, DELETE FROM table; does \na different job to TRUNCATE. You'd at minimum need the effect of DELETE \nfollowed by a VACUUM on the table and its indexes to be acceptable and \navoid the risk of rapid table + index bloat - and that'd be lots slower \nthan a TRUNCATE. You could be clever and lock the table then DELETE and \nset xmax at the same time I guess, but I suspect that'd be a bit of work \nand still wouldn't take care of the indexes.\n\nIt's also too complicated, not least because AFAIK util commands and \nCRUD commands go through very different paths in PostgreSQL.\n\nI guess you could propose and post a prototype patch for a new command \nthat tried to empty the table via whatever method it thought would be \nfastest. Such a new command wouldn't be bound by the accepted and \nexpected rules followed by TRUNCATE so it could vary its behaviour based \non the table, doing a real truncate on big tables and a \ndelete-then-vaccum on small tables. I suspect you'd land up writing the \nfairly complicated code for the potentially multi-table \ndelete-and-vaccum yourself.\n\nHonestly, though, it might be much better to start with \"how can \nTRUNCATE of empty or near-empty tables be made faster?\" and start \nexamining where the time goes.\n\n--\nCraig Ringer\n\n",
"msg_date": "Thu, 12 Jul 2012 09:26:14 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
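A short illustration of Craig's point that a DELETE-based substitute would need a follow-up VACUUM to approximate what TRUNCATE does in one step (the table name is hypothetical):

    -- One step: the table's and its indexes' files are replaced with empty ones.
    TRUNCATE TABLE test_fixture;

    -- DELETE only marks rows dead; a VACUUM is needed afterwards so the space
    -- can be reused and the table and its indexes do not bloat over many rounds.
    DELETE FROM test_fixture;
    VACUUM test_fixture;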
{
"msg_contents": "On 07/12/2012 06:51 AM, Daniel Farina wrote:\n> 15x slower. This is a Macbook Air with full disk encryption and SSD\n> disk with fsync off, e.g. a very typical developer configuration.\nDon't use full disk encryption for throwaway test data if you care about \nhow long those tests take. It's a lot like tuning the engine in your car \nwhile ignoring the fact that the handbrake is jammed on and you're \ndragging a parachute. Use a ramdisk or un-encrypted partition, something \nthat doesn't take three weeks to fsync().\n\n\nThat said, this performance gap makes me wonder if TRUNCATE is forcing \nmetadata synchronisation even with fsync=off, causing the incredibly \nglacially awesomely slow disk access of your average FDE system to kick \nin, possibly even once per table or even once per file (index, table, \ntoast, etc). If so, it may be worth:\n\n- Allowing TRUNCATE to skip synchronization when fsync=off. Pg is \nalready allowed to eat all your data if it feels like it in this \nconfiguration, so there's no point flushing filesystem metadata to make \nsure files are really swapped.\n\n- When fsync=on, trying to flush all changes to all files out at once \nrather than once per file as it could be doing (haven't checked) right \nnow. How to do this without also flushing all other pending I/O on the \nwhole system (with a global \"sync()\") would be somewhat OS/filesystem \ndependent, unfortunately.\n\nYou could help progress this issue constructively by doing some \nprofiling on your system, tracing Pg's system calls, and determining \nwhat exactly it's doing with DELETE vs TRUNCATE and where the time goes. \nOn Linux you'd use OProfile for this and on Solaris you'd use DTrace. \nDunno what facilities Mac OS X has but there must be something similar.\n\nOnce you've determined why it's slow, you have a useful starting point \nfor making it faster, first for test systems with fsync=off then, once \nthat's tracked down, maybe for robust systems with fsync=on.\n\n> I've seen way more than one complaint, and I'm quite sure there are\n> thousands of man hours (or more) spent on people who don't even know\n> to complain about such atrocious performance (or maybe it's so bad\n> that most people run a web search and find out, probably being left\n> really annoyed from having to yak shave as a result).\nI suspect you're right - as DB based unit testing becomes more \ncommonplace this is turning up a lot more. As DB unit tests were first \nreally popular in the ruby/rails crowd they've probably seen the most \npain, but as someone who doesn't move in those circles I wouldn't have \nknown. They certainly don't seem to have been making noise about it \nhere, and I've only recently seen some SO questions about it.\n\n> Perhaps a doc fix is all we need (TRUNCATE is constant-time on large\n> tables, but can be very slow compared to DELETE on small tables), but\n> I completely and enthusiastically reject any notion from people\n> calling this \"contrived\" or an \"edge case,\" because people writing\n> software against PostgreSQL that have unit tests have this use case\n> constantly, often dozens or even hundreds of times a day.\nI have to agree with this - it may have been an edge case in the past, \nbut it's becoming mainstream and is worth being aware of.\n\nThat said, the group of people who care about this most are not well \nrepresented as active contributors to PostgreSQL. 
I'd love it if you \ncould help start to change that by stepping in and taking a little time \nto profile exactly what's going on with your system so we can learn \nwhat, exactly, is slow.\n\n--\nCraig Ringer\n\n\n",
"msg_date": "Thu, 12 Jul 2012 09:41:54 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On Wed, Jul 11, 2012 at 6:41 PM, Craig Ringer <[email protected]> wrote:\n> On 07/12/2012 06:51 AM, Daniel Farina wrote:\n>>\n>> 15x slower. This is a Macbook Air with full disk encryption and SSD\n>> disk with fsync off, e.g. a very typical developer configuration.\n>\n> Don't use full disk encryption for throwaway test data if you care about how\n> long those tests take. It's a lot like tuning the engine in your car while\n> ignoring the fact that the handbrake is jammed on and you're dragging a\n> parachute. Use a ramdisk or un-encrypted partition, something that doesn't\n> take three weeks to fsync().\n\nNo. Full disk encryption is not that slow. And as we see, there is a\nworkaround that works \"just fine\" (maybe it could be faster, who\nknows?) in this exact configuration. The greater problem is more\nlikely to be HFS+, the file system.\n\nIf someone produces and gets adoption of a wonderfully packaged\ntest-configurations of Postgres using a ram-based block device that\nsomehow have a good user experience living alongside the persistent\nversion, this problem can go away completely. In fact, that would be\n*phenomenal*, because so many things could be so much faster. But\nthat's surprisingly challenging: for example, last I checked,\nPostgres.app, principally written by one of my colleagues, does *not*\ndisable fsync because we don't know of a great way to communicate the\nrelaxed expectations of durability, even though Postgres.app is\ntargeted towards developers: for example, it does not run until you\nlog in, so it's more like a foreground application. Maybe if the\nconnection had an option that said \"x-test=true\", or\nsomething...deposit your idea here.\n\nUntil then, this is an at the level of an is-ought problem: there is\nno immediate to even moderately distant future where people are not\ngoing to click the full disk encryption button their OS vendor gives\nthem (nor should they *not* click that: people love to download bits\nof data from production to their local machine to figure out problems,\nand I think the world is a better place for it), and people are going\nto use HFS+ in large numbers, so talking about how many people \"just\"\nought to reconfigure is tantamount to blaming the victim, especially\nwhen we have a sound and workable workaround in hand to at least prove\ndefinitively that the problem is not intractable.\n\n> That said, this performance gap makes me wonder if TRUNCATE is forcing\n> metadata synchronisation even with fsync=off, causing the incredibly\n> glacially awesomely slow disk access of your average FDE system to kick in,\n> possibly even once per table or even once per file (index, table, toast,\n> etc).\n\nLousy file system is my guess. HFS is not that great. I bet ext3\nwould be a reasonable model of this amount of pain as well.\n\n> You could help progress this issue constructively by doing some profiling on\n> your system, tracing Pg's system calls, and determining what exactly it's\n> doing with DELETE vs TRUNCATE and where the time goes. On Linux you'd use\n> OProfile for this and on Solaris you'd use DTrace. Dunno what facilities Mac\n> OS X has but there must be something similar.\n\nI'm sure I could, but first I want to put to complete rest the notion\nthat this is an \"edge case.\" It's only an edge case if the only\ndatabase you have runs in production. An understanding by more people\nthat this is a problem of at least moderate impact is a good first\nstep. 
I'll ask some of my more Macintosh-adept colleagues for advice.\n\n>> I've seen way more than one complaint, and I'm quite sure there are\n>> thousands of man hours (or more) spent on people who don't even know\n>> to complain about such atrocious performance (or maybe it's so bad\n>> that most people run a web search and find out, probably being left\n>> really annoyed from having to yak shave as a result).\n>\n> I suspect you're right - as DB based unit testing becomes more commonplace\n> this is turning up a lot more. As DB unit tests were first really popular in\n> the ruby/rails crowd they've probably seen the most pain, but as someone who\n> doesn't move in those circles I wouldn't have known. They certainly don't\n> seem to have been making noise about it here, and I've only recently seen\n> some SO questions about it.\n\nWell, here's another anecdotal data point to show how this can sneak\nunder the radar: because this was a topic of discussion in the office\ntoday, a colleague in the Department of Data discovered his 1.5 minute\ntesting cycle could be cut to thirty seconds. We conservatively\nestimate he runs the tests 30 times a day when working on his project,\nand probably more. Multiply that over a few weeks (not even counting\nthe cost of more broken concentration) and we're talking a real loss\nof productivity and satisfaction.\n\nHere's an example of a person that works on a Postgres-oriented\nproject at his day job, has multi-year experience with it, and can\nwrite detailed articles like these:\nhttps://devcenter.heroku.com/articles/postgresql-concurrency . If he\ndidn't know to get this right without having it called out as a\ncaveat, what number of people have but the most slim chance? Our best\nasset is probably the relative obscurity of TRUNCATE vs. DELETE for\nthose who are less familiar with the system.\n\nI'm sure he would have found it eventually when starting to profile\nhis tests when they hit the 3-4 minute mark, although he might just as\neasily said \"well, TRUNCATE, that's the fast one...nothing to do\nthere...\".\n\n> That said, the group of people who care about this most are not well\n> represented as active contributors to PostgreSQL. I'd love it if you could\n> help start to change that by stepping in and taking a little time to profile\n> exactly what's going on with your system so we can learn what, exactly, is\n> slow.\n\nIt's not my platform of choice, per se, but on my Ubuntu Precise on\next4 with fsync off and no disk encryption:\n\n$ rake\n55.37user 2.36system 1:15.33elapsed 76%CPU (0avgtext+0avgdata\n543120maxresident)k\n0inputs+2728outputs (0major+85691minor)pagefaults 0swaps\n\n$ rake\n53.85user 1.97system 2:04.38elapsed 44%CPU (0avgtext+0avgdata\n547904maxresident)k\n0inputs+2640outputs (0major+100226minor)pagefaults 0swaps\n\nWhich is a not-as-pathetic slowdown, but still pretty substantial,\nbeing somewhat shy of 2x. I'll ask around for someone who is\nMacintosh-OS-inclined (not as a user, but as a developer) about a good\nway to get a profile.\n\n-- \nfdr\n",
"msg_date": "Wed, 11 Jul 2012 23:12:35 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On 07/12/2012 02:12 PM, Daniel Farina wrote:\n> On Wed, Jul 11, 2012 at 6:41 PM, Craig Ringer <[email protected]> wrote:\n>> On 07/12/2012 06:51 AM, Daniel Farina wrote:\n>>> 15x slower. This is a Macbook Air with full disk encryption and SSD\n>>> disk with fsync off, e.g. a very typical developer configuration.\n>> Don't use full disk encryption for throwaway test data if you care about how\n>> long those tests take. It's a lot like tuning the engine in your car while\n>> ignoring the fact that the handbrake is jammed on and you're dragging a\n>> parachute. Use a ramdisk or un-encrypted partition, something that doesn't\n>> take three weeks to fsync().\n> No. Full disk encryption is not that slow. And as we see, there is a\n> workaround that works \"just fine\" (maybe it could be faster, who\n> knows?) in this exact configuration. The greater problem is more\n> likely to be HFS+, the file system.\n\nThe two are somewhat hand in hand in any case.\n\n\"Three weeks\" is of course hyperbole. Nonetheless, I haven't seen a full \ndisk encryption system that doesn't dramatically slow down synchronous \noperations by forcing a lot more work to be done than would be the case \nwithout disk encryption. Perhaps the Mac OS X / HFS+ solution is an \nexception to this, but I doubt it.\n\nGiven a small program that repeats the following sequence:\n\n- Creates a file\n- Writes few bytes to it\n- fsync()s and closes it\n- deletes it\n- fsync()s the directory to ensure the metadata change is flushed\n\n... and times it, it'd be interesting to do test runs with and without \nencryption on HFS+.\n\n\n> But\n> that's surprisingly challenging: for example, last I checked,\n> Postgres.app, principally written by one of my colleagues, does *not*\n> disable fsync because we don't know of a great way to communicate the\n> relaxed expectations of durability, even though Postgres.app is\n> targeted towards developers\n\nI think this is an issue of developer and user responsibility. Proper \ntest/dev separation from production, and a bit of thought, is all it \ntakes. After all, Pg can't stop you running your unit tests (full of all \nthose slow TRUNCATEs) against your production database, either. \nDurability isn't worth a damn if you just deleted all your data.\n\nAbout the only technical aid I can see for this would be some kind of \nGUC that the app could proactively check against. Set it to \"production\" \nfor your production DB, and \"test\" for your throwaways. If the unit \ntests see \"production\" they refuse to run; if the app proper sees \"test\" \nit warns about data durability. Have it default to unset or \"test\" so \nadmins must explicitly set it to \"production\".\n\nHandily, this is already possible. You can add whatever custom GUCs you \nwant. If you want to make your unit tests require that a GUC called \n\"stage.is_production\" be off in order to run, just add to postgresql.conf:\n\n custom_variable_classes = 'stage'\n stage.is_production = off\n\nnow, you can see the new GUC:\n\nregress=# SHOW stage.is_production;\n stage.is_production\n---------------------\n off\n(1 row)\n\n... so your unit tests and app can check for it. Since you're producing \ncustom installers, this is something you can bundle as part of the \ngenerated postgresql.conf for easy differentiation between test and \nproduction DBs.\n\nIf requirements like this were integrated into common unit testing \nframeworks some of these worries would go away. 
That's not something Pg \ncane make happen, though.\n\nHow would you want to see it work? How would you solve this problem?\n\n> Until then, this is an at the level of an is-ought problem: there is\n> no immediate to even moderately distant future where people are not\n> going to click the full disk encryption button their OS vendor gives\n> them (nor should they *not* click that: people love to download bits\n> of data from production to their local machine to figure out problems,\n> and I think the world is a better place for it), and people are going\n> to use HFS+ in large numbers, so talking about how many people \"just\"\n> ought to reconfigure is tantamount to blaming the victim, especially\n> when we have a sound and workable workaround in hand to at least prove\n> definitively that the problem is not intractable.\n\nYes, people do work on production data in test envs, and FDE is overall \na plus. I'd rather they not turn it off - and rather they not have to. \nThat's why I suggested using a ramdisk as an alternative; it's \ncompletely non-durable and just gets tossed out, so there's no more \nworry about data leakage than there is for access to the disk cache \nbuffered in RAM or the mounted disks of a FDE machine when it's unlocked.\n\nSetting up Pg to run off a ramdisk isn't a one-click trivial operation, \nand it sounds like the group you're mainly interested in are the \ndatabase-as-a-utility crowd that prefer not to see, think about, or \ntouch the database directly, hence Postgres.app etc. If so this is much \nmore of a packaging problem than a core Pg problem. I take your point \nabout needing to be able to indicate lack of durability to clients, but \nthink it's relatively easily done with a custom GUC as shown above.\n\nOf course, Pg on a ramdisk has other issues that quickly become apparent \nwhen you \"COPY\" that 2GB CSV file into your DB...\n\n> Lousy file system is my guess. HFS is not that great. I bet ext3 would\n> be a reasonable model of this amount of pain as well.\n\nHey, HFS+ Journaled/Extended, which is all that you're ever likely to \nsee, is merely bad :-P\n\nThe original HFS, now that was a monster. Not-so-fond memories of \nregular Norton tools defrag runs resurfacing from my Mac OS 7 days...\n\n> I'm sure I could, but first I want to put to complete rest the notion\n> that this is an \"edge case.\" It's only an edge case if the only\n> database you have runs in production. An understanding by more people\n> that this is a problem of at least moderate impact is a good first\n> step. I'll ask some of my more Macintosh-adept colleagues for advice.\n\nThat'd be great; as this is an issue having real world impact, people \nwith mac equipment and knowledge need to get involved in helping to \nsolve it. It's not confined to mac, but seems to be worse there.\n\nThe other way you could help would be by providing canned self-contained \ntest cases that can be used to demonstrate the big performance gaps \nyou're reporting and test them on other platforms / OSes / file systems. \nSomething with a \"I've never used Ruby\" quickstart.\n\n> Here's an example of a person that works on a Postgres-oriented\n> project at his day job, has multi-year experience with it, and can\n> write detailed articles like these:\n> https://devcenter.heroku.com/articles/postgresql-concurrency . If he\n> didn't know to get this right without having it called out as a\n> caveat, what number of people have but the most slim chance? 
Our best\n> asset is probably the relative obscurity of TRUNCATE vs. DELETE for\n> those who are less familiar with the system.\n\nYep. This whole issue was new to me until last week too. I run tests \nagainst my DB but it's fast enough here. In any case, for my tests other \ncosts are greatly more significant than a few fractions of a second \ndifference in one DB operation. Clearly that's not the case for some DB \nunit testing designs.\n\nOther than ruby/rails/rake, what other systems are you aware of that're \naffected by these issues? I'm not dismissing ruby, I just want to know \nif you know of other groups or techs that're ALSO affected.\n\n> Which is a not-as-pathetic slowdown, but still pretty substantial,\n> being somewhat shy of 2x. I'll ask around for someone who is\n> Macintosh-OS-inclined (not as a user, but as a developer) about a good\n> way to get a profile.\n\nThat'd be great. Get them onto the list and involved, because if you \nwant to see this improved it's going to take some back and forth and \nsomeone who can interpret the profile results, test changes, etc.\n\nI only have a limited ability and willingness to drive this forward; I \nhave to focus on other things. You'll need to be willing to be proactive \nand push this a bit. Figuring out what part of truncation is taking the \ntime would be a big plus, as would determining how much worse FDE makes \nit vs an unencrypted disk.\n\nHopefully others are interested and following along too.\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 12 Jul 2012 15:45:53 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
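One possible guard a test suite could run before touching any data, assuming the custom_variable_classes setup shown in the message above; the check itself is a sketch, not an established convention.

    -- Hypothetical guard: abort if the target database declares itself
    -- as production via the custom GUC.
    DO $$
    BEGIN
        IF current_setting('stage.is_production') = 'on' THEN
            RAISE EXCEPTION 'refusing to run tests against a production database';
        END IF;
    END;
    $$;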
{
"msg_contents": "On Wed, Jul 11, 2012 at 3:51 PM, Daniel Farina <[email protected]> wrote:\n>\n> Nope. I don't. But an exact crossover is a level of precision I don't\n> really need, because here are where things stand on a completely\n> unremarkable test suite on the closest project to me that meets the\n> \"regular web-app\" profile case:\n>\n> With en-masse DELETE:\n> rake 41.89s user 3.08s system 76% cpu 58.629 total\n>\n> With TRUNCATE:\n> rake 49.86s user 2.93s system 5% cpu 15:17.88 total\n>\n> 15x slower. This is a Macbook Air with full disk encryption and SSD\n> disk with fsync off, e.g. a very typical developer configuration.\n\nWhat is shared_buffers?\n\n> This is a rather small schema -- probably a half a dozen tables, and\n> probably about a dozen indexes. This application is entirely\n> unremarkable in its test-database workload: it wants to load a few\n> records, do a few things, and then clear those handful of records.\n\nHow many rounds of truncation does one rake do? I.e. how many\ntruncations are occurring over the course of that 1 minute or 15\nminutes?\n\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 12 Jul 2012 12:15:15 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Hi, \n\nI work with Daniel Farina and was the other engineer who \"discovered\" this, once again. That is, I got bit by it and have been running TRUNCATE on my test suites for years. \n\n\nOn Thursday, July 12, 2012 at 12:15 PM, Jeff Janes wrote:\n\n> On Wed, Jul 11, 2012 at 3:51 PM, Daniel Farina <[email protected] (mailto:[email protected])> wrote:\n> > \n> > Nope. I don't. But an exact crossover is a level of precision I don't\n> > really need, because here are where things stand on a completely\n> > unremarkable test suite on the closest project to me that meets the\n> > \"regular web-app\" profile case:\n> > \n> > With en-masse DELETE:\n> > rake 41.89s user 3.08s system 76% cpu 58.629 total\n> > \n> > With TRUNCATE:\n> > rake 49.86s user 2.93s system 5% cpu 15:17.88 total\n> > \n> > 15x slower. This is a Macbook Air with full disk encryption and SSD\n> > disk with fsync off, e.g. a very typical developer configuration.\n> > \n> \n> \n> What is shared_buffers?\n\n1600kB\n\nNot sure this will make much difference with such small data, but of course I could be dead wrong here.\n> \n> > This is a rather small schema -- probably a half a dozen tables, and\n> > probably about a dozen indexes. This application is entirely\n> > unremarkable in its test-database workload: it wants to load a few\n> > records, do a few things, and then clear those handful of records.\n> > \n> \n> \n> How many rounds of truncation does one rake do? I.e. how many\n> truncations are occurring over the course of that 1 minute or 15\n> minutes?\n> \n> \n\n\nAll tables are cleared out after every test. On this particular project, I'm running 200+ tests in 1.5 minutes (or 30 seconds with DELETE instead of TRUNCATE). For another, bigger project it's running 1700+ tests in about a minute. You can do the math from there.\n\nI'd say this is not atypical at all, so I too encourage teaching TRUNCATE about small tables and optimizing for that, as well as a section in the docs about postgres tweaks for test suites. I'm sure many people have done independent research in this area, and it'd be great to have it documented in one place.\n\n-Harold \n> \n> \n> Cheers,\n> \n> Jeff\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected] (mailto:[email protected]))\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n\n\n\nHi,\n I work with Daniel Farina and was the other engineer who \"discovered\" this, once again. That is, I got bit by it and have been running TRUNCATE on my test suites for years.\n\nOn Thursday, July 12, 2012 at 12:15 PM, Jeff Janes wrote:\n\nOn Wed, Jul 11, 2012 at 3:51 PM, Daniel Farina <[email protected]> wrote:Nope. I don't. But an exact crossover is a level of precision I don'treally need, because here are where things stand on a completelyunremarkable test suite on the closest project to me that meets the\"regular web-app\" profile case:With en-masse DELETE:rake 41.89s user 3.08s system 76% cpu 58.629 totalWith TRUNCATE:rake 49.86s user 2.93s system 5% cpu 15:17.88 total15x slower. This is a Macbook Air with full disk encryption and SSDdisk with fsync off, e.g. a very typical developer configuration.What is shared_buffers?1600kBNot sure this will make much difference with such small data, but of course I could be dead wrong here.This is a rather small schema -- probably a half a dozen tables, andprobably about a dozen indexes. 
This application is entirelyunremarkable in its test-database workload: it wants to load a fewrecords, do a few things, and then clear those handful of records.How many rounds of truncation does one rake do? I.e. how manytruncations are occurring over the course of that 1 minute or 15minutes?All tables are cleared out after every test. On this particular project, I'm running 200+ tests in 1.5 minutes (or 30 seconds with DELETE instead of TRUNCATE). For another, bigger project it's running 1700+ tests in about a minute. You can do the math from there.I'd say this is not atypical at all, so I too encourage teaching TRUNCATE about small tables and optimizing for that, as well as a section in the docs about postgres tweaks for test suites. I'm sure many people have done independent research in this area, and it'd be great to have it documented in one place.-Harold Cheers,Jeff-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 12 Jul 2012 16:21:13 -0700",
"msg_from": "\"=?utf-8?Q?Harold_A._Gim=C3=A9nez?=\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On Thu, Jul 12, 2012 at 4:21 PM, Harold A. Giménez\n<[email protected]> wrote:\n>\n> > What is shared_buffers?\n>\n>\n> 1600kB\n\nThat is really small, so the buffer flushing should not be a problem.\nUnless you mean 1600MB.\n\n\n> > > This is a rather small schema -- probably a half a dozen tables, and\n> > > probably about a dozen indexes. This application is entirely\n> > > unremarkable in its test-database workload: it wants to load a few\n> > > records, do a few things, and then clear those handful of records.\n> >\n> > How many rounds of truncation does one rake do? I.e. how many\n> > truncations are occurring over the course of that 1 minute or 15\n> > minutes?\n>\n> All tables are cleared out after every test. On this particular project, I'm\n> running 200+ tests in 1.5 minutes (or 30 seconds with DELETE instead of\n> TRUNCATE). For another, bigger project it's running 1700+ tests in about a\n> minute. You can do the math from there.\n\nso 1700 rounds * 18 relations = truncates 30,600 per minute.\n\nThat is actually faster than I get truncates to go when I am purely\nlimited by CPU.\n\nI think the problem is in the Fsync Absorption queue. Every truncate\nadds a FORGET_RELATION_FSYNC to the queue, and processing each one of\nthose leads to sequential scanning the checkpointer's pending ops hash\ntable, which is quite large. It is almost entirely full of other\nrequests which have already been canceled, but it still has to dig\nthrough them all. So this is essentially an N^2 operation.\n\nI'm not sure why we don't just delete the entry instead of marking it\nas cancelled. It looks like the only problem is that you can't delete\nan entry other than the one just returned by hash_seq_search. Which\nwould be fine, as that is the entry that we would want to delete;\nexcept that mdsync might have a different hash_seq_search open, and so\nit wouldn't be safe to delete.\n\nIf the segno was taken out of the hash key and handled some other way,\nthen the forgetting could be done with a simple hash look up rather\nthan a full scan.\n\nMaybe we could just turn off the pending ops table altogether when\nfsync=off, but since fsync is PGC_SIGHUP it is not clear how you could\nsafely turn it back on.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 12 Jul 2012 18:00:49 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
},
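To make the O(N^2) cost Jeff describes above concrete: each FORGET_RELATION_FSYNC request is absorbed by walking the entire pending-ops hash table just to flag a few matching entries, and the flagged entries are never removed because mdsync() may have its own scan open on the same table. A rough paraphrase of that forget path (field names simplified; this is not the literal md.c source):

    static void
    forget_relation_fsync(RelFileNodeBackend rnode)
    {
        HASH_SEQ_STATUS hstat;
        PendingOperationEntry *entry;

        /* every truncate pays a full sequential scan of the table ... */
        hash_seq_init(&hstat, pendingOpsTable);
        while ((entry = (PendingOperationEntry *) hash_seq_search(&hstat)) != NULL)
        {
            /* ... just to mark a handful of entries as canceled */
            if (RelFileNodeBackendEquals(entry->tag.rnode, rnode))
                entry->canceled = true;
        }
    }

At tens of thousands of truncates per minute, with the table mostly full of already-canceled requests, the quadratic blowup follows directly.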
{
"msg_contents": "I've moved this thread from performance to hackers.\n\nThe topic was poor performance when truncating lots of small tables\nrepeatedly on test environments with fsync=off.\n\nOn Thu, Jul 12, 2012 at 6:00 PM, Jeff Janes <[email protected]> wrote:\n\n> I think the problem is in the Fsync Absorption queue. Every truncate\n> adds a FORGET_RELATION_FSYNC to the queue, and processing each one of\n> those leads to sequential scanning the checkpointer's pending ops hash\n> table, which is quite large. It is almost entirely full of other\n> requests which have already been canceled, but it still has to dig\n> through them all. So this is essentially an N^2 operation.\n\nMy attached Proof of Concept patch reduces the run time of the\nbenchmark at the end of this message from 650sec to 84sec,\ndemonstrating that this is in fact the problem. Which doesn't mean\nthat my patch is the right answer to it, of course.\n\n(The delete option is still faster than truncate, coming in at around 55sec)\n\n\n> I'm not sure why we don't just delete the entry instead of marking it\n> as cancelled. It looks like the only problem is that you can't delete\n> an entry other than the one just returned by hash_seq_search. Which\n> would be fine, as that is the entry that we would want to delete;\n> except that mdsync might have a different hash_seq_search open, and so\n> it wouldn't be safe to delete.\n>\n> If the segno was taken out of the hash key and handled some other way,\n> then the forgetting could be done with a simple hash look up rather\n> than a full scan.\n\nThe above two ideas might be the better solution, as they would work\neven when fsync=on. Since BBU are becoming so popular I think the\nfsync queue could be a problem even with fsync on if the fsync is fast\nenough. But I don't immediately know how to implement them.\n\n> Maybe we could just turn off the pending ops table altogether when\n> fsync=off, but since fsync is PGC_SIGHUP it is not clear how you could\n> safely turn it back on.\n\nNow that I think about it, I don't see how turning fsync from off to\non can ever be known to be safe, until a system wide sync has\nintervened. After all a segment that was dirtied and added to the\npending ops table while fsync=off might also be removed from the\npending ops table the microsecond before fsync is turned on, so how is\nthat different from never adding it in the first place?\n\nThe attached Proof Of Concept patch implements this in two ways, one\nof which is commented out. The commented out way omits the overhead\nof sending the request to the checkpointer in the first place, but\nbreaks modularity a bit.\n\nThe benchmark used on 9.3devel head is:\n\nfsync=off, all other defaults.\n\n## one time initialization\nperl -le 'print \"create schema foo$_; create table foo$_.foo$_ (k\ninteger, v integer);\" $ARGV[0]..$ARGV[0]+$ARGV[1]-1' 0 10 |psql\n\n## actual benchmark.\nperl -le 'print \"set client_min_messages=warning;\";\n foreach (1..10000) {\n print \"BEGIN;\\n\";\n print \"insert into foo$_.foo$_ select * from\ngenerate_series(1,10); \" foreach $ARGV[0]..$ARGV[0]+$ARGV[1]-1;\n print \"COMMIT;\\nBEGIN;\\n\";\n print \"truncate table foo$_.foo$_; \" foreach\n$ARGV[0]..$ARGV[0]+$ARGV[1]-1;\n #print \"delete from foo$_.foo$_; \" foreach\n$ARGV[0]..$ARGV[0]+$ARGV[1]-1;\n print \"COMMIT;\\n\"\n } ' 0 10 | time psql > /dev/null\n\nCheers,\n\nJeff",
"msg_date": "Thu, 12 Jul 2012 21:55:22 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On Thu, Jul 12, 2012 at 9:55 PM, Jeff Janes <[email protected]> wrote:\n> I've moved this thread from performance to hackers.\n>\n> The topic was poor performance when truncating lots of small tables\n> repeatedly on test environments with fsync=off.\n>\n> On Thu, Jul 12, 2012 at 6:00 PM, Jeff Janes <[email protected]> wrote:\n>\n>> I think the problem is in the Fsync Absorption queue. Every truncate\n>> adds a FORGET_RELATION_FSYNC to the queue, and processing each one of\n>> those leads to sequential scanning the checkpointer's pending ops hash\n>> table, which is quite large. It is almost entirely full of other\n>> requests which have already been canceled, but it still has to dig\n>> through them all. So this is essentially an N^2 operation.\n...\n>\n>> I'm not sure why we don't just delete the entry instead of marking it\n>> as cancelled. It looks like the only problem is that you can't delete\n>> an entry other than the one just returned by hash_seq_search. Which\n>> would be fine, as that is the entry that we would want to delete;\n>> except that mdsync might have a different hash_seq_search open, and so\n>> it wouldn't be safe to delete.\n\nThe attached patch addresses this problem by deleting the entry when\nit is safe to do so, and flagging it as canceled otherwise.\n\nI thought of using has_seq_scans to determine when it is safe, but\ndynahash.c does not make that function public, and I was afraid it\nmight be too slow, anyway.\n\nSo instead I used a static variable, plus the knowledge that the only\ntime there are two scans on the table is when mdsync starts one and\nthen calls RememberFsyncRequest indirectly. There is one other place\nthat does a seq scan, but there is no way for control to pass from\nthat loop to reach RememberFsyncRequest.\n\nI've added code to disclaim the scan if mdsync errors out. I don't\nthink that this should a problem because at that point the scan object\nis never going to be used again, so if its internal state gets screwed\nup it shouldn't matter. However, I wonder if it should also call\nhash_seq_term, otherwise the pending ops table will be permanently\nprevented from expanding (this is a pre-existing condition, not to do\nwith my patch). Since I don't know what can make mdsync error out\nwithout being catastrophic, I don't know how to test this out.\n\nOne concern is that if the ops table ever does become bloated, it can\nnever recover while under load. The bloated table will cause mdsync\nto take a long time to run, and as long as mdsync is in the call stack\nthe antibloat feature is defeated--so we have crossed a tipping point\nand cannot get back. I don't see that occurring in the current use\ncase, however. With my current benchmark, the anti-bloat is effective\nenough that mdsync never takes very long to execute, so a virtuous\ncircle exists.\n\nAs an aside, the comments in dynahash.c seem to suggest that one can\nalways delete the entry returned by hash_seq_search, regardless of the\nexistence of other sequential searches. I'm pretty sure that this is\nnot true. Also, shouldn't this contract about when one is allowed to\ndelete entries be in the hsearch.h file, rather than the dynahash.c\nfile?\n\nAlso, I still wonder if it is worth memorizing fsyncs (under\nfsync=off) that may or may not ever take place. Is there any\nguarantee that we can make by doing so, that couldn't be made\notherwise?\n\n\nCheers,\n\nJeff",
"msg_date": "Sat, 14 Jul 2012 17:10:18 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> On Thu, Jul 12, 2012 at 9:55 PM, Jeff Janes <[email protected]> wrote:\n>> The topic was poor performance when truncating lots of small tables\n>> repeatedly on test environments with fsync=off.\n>> \n>> On Thu, Jul 12, 2012 at 6:00 PM, Jeff Janes <[email protected]> wrote:\n>>> I think the problem is in the Fsync Absorption queue. Every truncate\n>>> adds a FORGET_RELATION_FSYNC to the queue, and processing each one of\n>>> those leads to sequential scanning the checkpointer's pending ops hash\n>>> table, which is quite large. It is almost entirely full of other\n>>> requests which have already been canceled, but it still has to dig\n>>> through them all. So this is essentially an N^2 operation.\n\n> The attached patch addresses this problem by deleting the entry when\n> it is safe to do so, and flagging it as canceled otherwise.\n\nI don't like this patch at all. It seems ugly and not terribly safe,\nand it won't help at all when the checkpointer is in the midst of an\nmdsync scan, which is a nontrivial part of its cycle.\n\nI think what we ought to do is bite the bullet and refactor the\nrepresentation of the pendingOps table. What I'm thinking about\nis reducing the hash key to just RelFileNodeBackend + ForkNumber,\nso that there's one hashtable entry per fork, and then storing a\nbitmap to indicate which segment numbers need to be sync'd. At\none gigabyte to the bit, I think we could expect the bitmap would\nnot get terribly large. We'd still have a \"cancel\" flag in each\nhash entry, but it'd apply to the whole relation fork not each\nsegment.\n\nIf we did this then the FORGET_RELATION_FSYNC code path could use\na hashtable lookup instead of having to traverse the table\nlinearly; and that would get rid of the O(N^2) performance issue.\nThe performance of FORGET_DATABASE_FSYNC might still suck, but\nDROP DATABASE is a pretty heavyweight operation anyhow.\n\nI'm willing to have a go at coding this design if it sounds sane.\nComments?\n\n> Also, I still wonder if it is worth memorizing fsyncs (under\n> fsync=off) that may or may not ever take place. Is there any\n> guarantee that we can make by doing so, that couldn't be made\n> otherwise?\n\nYeah, you have a point there. It's not real clear that switching fsync\nfrom off to on is an operation that we can make any guarantees about,\nshort of executing something like the code recently added to initdb\nto force-sync the entire PGDATA tree. Perhaps we should change fsync\nto be PGC_POSTMASTER (ie frozen at postmaster start), and then we could\nskip forwarding fsync requests when it's off?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Jul 2012 14:29:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
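For readers following along, the reshaping Tom proposes above could be pictured like this (declarations are illustrative only, with made-up names, not taken from any actual patch):

    /* hash key shrinks to relation + fork */
    typedef struct
    {
        RelFileNodeBackend rnode;      /* relation, including backend for temp rels */
        ForkNumber  forknum;           /* fork within the relation */
    } PendingFsyncKey;

    typedef struct
    {
        PendingFsyncKey key;           /* one entry per relation fork */
        Bitmapset  *segnos;            /* 1GB segment numbers still needing fsync */
        bool        canceled;          /* whole fork canceled since mdsync began */
    } PendingFsyncEntry;

With that layout, FORGET_RELATION_FSYNC turns into at most one hash lookup per fork instead of a scan of the whole table, which is what eliminates the O(N^2) behaviour.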
{
"msg_contents": "... btw, in the penny wise and pound foolish department, I observe that\nsmgrdounlink calls mdunlink separately for each possibly existing fork\nof a relation to be dropped. That means we are queuing a separate fsync\nqueue entry for each fork, and could immediately save a factor of four\nin FORGET_RELATION_FSYNC traffic if we were to redefine those queue\nentries as applying to all forks. The only reason to have a per-fork\nvariant, AFAICS, is for smgrdounlinkfork(), which is used nowhere and\nexists only because I was too chicken to remove the functionality\noutright in commit ece01aae479227d9836294b287d872c5a6146a11. But given\nthat we know the fsync queue can be a bottleneck, my vote is to refactor\nmdunlink to apply to all forks and send only one message.\n\nI am also wondering whether it's really necessary to send fsync request\nmessages for backend-local relations. If rnode.backend says it's local,\ncan't we skip sending the fsync request? All local relations are\nflush-on-crash anyway, no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Jul 2012 18:37:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On 07/16/2012 02:29 AM, Tom Lane wrote:\n> Yeah, you have a point there. It's not real clear that switching fsync\n> from off to on is an operation that we can make any guarantees about,\n> short of executing something like the code recently added to initdb\n> to force-sync the entire PGDATA tree.\n\nThere's one way that doesn't have any housekeeping cost to Pg. It's \npretty bad manners if there's anybody other than Pg on the system though:\n\n sync()\n\nLet the OS do the housekeeping.\n\nIt's possible to do something similar on Windows, in that there are \nutilities for the purpose:\n\n http://technet.microsoft.com/en-us/sysinternals/bb897438.aspx\n\nThis probably uses:\n\n http://msdn.microsoft.com/en-us/library/s9xk9ehd%28VS.71%29.aspx\n\nfrom COMMODE.OBJ (unfortunate name), which has existed since win98.\n\n\n> Perhaps we should change fsync\n> to be PGC_POSTMASTER (ie frozen at postmaster start), and then we could\n> skip forwarding fsync requests when it's off?\n\nPersonally, I didn't even know it was runtime switchable.\n\nfsync=off is much less necessary with async commits, group commit via \ncommit delay, WAL improvements, etc. To me it's mostly of utility when \ntesting, particularly on SSDs. I don't see a DB restart requirement as a \nbig issue. It'd be interesting to see what -general has to say, if there \nare people depending on this.\n\nIf it's necessary to retain the ability to runtime switch it, making it \na somewhat rude sync() in exchange for boosted performance the rest of \nthe time may well be worthwhile anyway. It'd be interesting to see.\n\nAll this talk of synchronisation is making me really frustrated that \nthere seems to be very poor support in OSes for syncing a set of files \nin a single pass, potentially saving a lot of time and thrashing. A way \nto relax the ordering guarantee from \"Files are synced in the order \nfsync() is called on each\" to \"files are all synced when this call \ncompletes\" would be great. I've been running into this issue in some \nnon-Pg-related work and it's been bugging me.\n\n--\nCraig Ringer\n\n",
"msg_date": "Mon, 16 Jul 2012 08:22:59 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> On 07/16/2012 02:29 AM, Tom Lane wrote:\n>> Yeah, you have a point there. It's not real clear that switching fsync\n>> from off to on is an operation that we can make any guarantees about,\n>> short of executing something like the code recently added to initdb\n>> to force-sync the entire PGDATA tree.\n\n> There's one way that doesn't have any housekeeping cost to Pg. It's \n> pretty bad manners if there's anybody other than Pg on the system though:\n> sync()\n\nYeah, I thought about that: if we could document that issuing a manual\nsync after turning fsync on leaves you in a guaranteed-good state once\nthe sync is complete, it'd probably be fine. However, I'm not convinced\nthat we could promise that with a straight face. In the first place,\nPG has only very weak guarantees about how quickly all processes in the\nsystem will absorb a GUC update. In the second place, I'm not entirely\nsure that there aren't race conditions around checkpoints and the fsync\nrequest queue (particularly if we do what Jeff is suggesting and\nsuppress queuing requests at the upstream end). It might be all right,\nor it might be all right after expending some work, but the whole thing\nis not an area where I think anyone wants to spend time. I think it'd\nbe much safer to document that the correct procedure is \"stop the\ndatabase, do a manual sync, enable fsync in postgresql.conf, restart the\ndatabase\". And if that's what we're documenting, we lose little or\nnothing by marking fsync as PGC_POSTMASTER.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Jul 2012 21:37:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On 07/16/2012 09:37 AM, Tom Lane wrote:\n>> There's one way that doesn't have any housekeeping cost to Pg. It's\n>> pretty bad manners if there's anybody other than Pg on the system though:\n>> sync()\n> Yeah, I thought about that: if we could document that issuing a manual\n> sync after turning fsync on leaves you in a guaranteed-good state once\n> the sync is complete, it'd probably be fine. However, I'm not convinced\n> that we could promise that with a straight face. In the first place,\n> PG has only very weak guarantees about how quickly all processes in the\n> system will absorb a GUC update. In the second place, I'm not entirely\n> sure that there aren't race conditions around checkpoints and the fsync\n> request queue (particularly if we do what Jeff is suggesting and\n> suppress queuing requests at the upstream end). It might be all right,\n> or it might be all right after expending some work, but the whole thing\n> is not an area where I think anyone wants to spend time. I think it'd\n> be much safer to document that the correct procedure is \"stop the\n> database, do a manual sync, enable fsync in postgresql.conf, restart the\n> database\". And if that's what we're documenting, we lose little or\n> nothing by marking fsync as PGC_POSTMASTER.\nSounds reasonable to me; I tend to view fsync=off as a testing feature \nanyway. Will clone onto -general and see if anyone yells.\n\n--\nCraig Ringer\n\n\n",
"msg_date": "Mon, 16 Jul 2012 09:43:02 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Hi all\n\nSome performance improvements have been proposed - probably for 9.3 - \nthat will mean the `fsync' GUC can only be changed with a full cluster \nrestart. See quoted, at end of message.\n\nIt is currently possible to change `fsync' by altering postgresql.conf \nand issuing a `pg_ctl reload' . It is not clear how safe this really is \neven now, and changes proposed to reduce the amount of expensive \nbookkeeping done when fsync is set to 'off' will make it even less safe. \nConsequently, it is proposed that the ability to change the fsync \nsetting while Pg is running be removed.\n\nfsync=off is very unsafe anyway, and these days production setups are \nable to get similar results with async commits and group commit.\n\nIs there anyone here relying on being able to change fsync=off to \nfsync=on at runtime? If so, what for, and what does it gain you over use \nof group/async commit?\n\nFor related discussion see the -hackers thread:\n\n \"DELETE vs TRUNCATE explanation\"\n\n \nhttp://archives.postgresql.org/message-id/CAMkU=1yLXvODRZZ_=fgrEeJfk2tvZPTTD-8n8BwrAhNz_WBT0A@mail.gmail.com\n\n\nand the background threads:\n\n \"PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the \nfastest way to clean each non-empty table and reset unique identifier \ncolumn of empty ones.\"\n\n \nhttp://archives.postgresql.org/message-id/CAFXpGYbgmZYij4TgCbOF24-usoiDD0ASQeaVAkYtB7E2TYm8Wg@mail.gmail.com\n\n \"DELETE vs TRUNCATE explanation\"\n\n http://archives.postgresql.org/message-id/[email protected]\n\n\n\nOn 07/16/2012 09:37 AM, Tom Lane wrote:\n> Craig Ringer <[email protected]> writes:\n>> On 07/16/2012 02:29 AM, Tom Lane wrote:\n>>> Yeah, you have a point there. It's not real clear that switching fsync\n>>> from off to on is an operation that we can make any guarantees about,\n>>> short of executing something like the code recently added to initdb\n>>> to force-sync the entire PGDATA tree.\n>\n>> There's one way that doesn't have any housekeeping cost to Pg. It's\n>> pretty bad manners if there's anybody other than Pg on the system though:\n>> sync()\n>\n> Yeah, I thought about that: if we could document that issuing a manual\n> sync after turning fsync on leaves you in a guaranteed-good state once\n> the sync is complete, it'd probably be fine. However, I'm not convinced\n> that we could promise that with a straight face. In the first place,\n> PG has only very weak guarantees about how quickly all processes in the\n> system will absorb a GUC update. In the second place, I'm not entirely\n> sure that there aren't race conditions around checkpoints and the fsync\n> request queue (particularly if we do what Jeff is suggesting and\n> suppress queuing requests at the upstream end). It might be all right,\n> or it might be all right after expending some work, but the whole thing\n> is not an area where I think anyone wants to spend time. I think it'd\n> be much safer to document that the correct procedure is \"stop the\n> database, do a manual sync, enable fsync in postgresql.conf, restart the\n> database\". And if that's what we're documenting, we lose little or\n> nothing by marking fsync as PGC_POSTMASTER.\n>\n> \t\t\tregards, tom lane\n>\n\n\n",
"msg_date": "Mon, 16 Jul 2012 09:54:44 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Proposed change for 9.3(?): Require full restart to change fsync\n\tparameter, not just pg_ctl reload"
},
{
"msg_contents": "On Sun, Jul 15, 2012 at 2:29 PM, Tom Lane <[email protected]> wrote:\n> I think what we ought to do is bite the bullet and refactor the\n> representation of the pendingOps table. What I'm thinking about\n> is reducing the hash key to just RelFileNodeBackend + ForkNumber,\n> so that there's one hashtable entry per fork, and then storing a\n> bitmap to indicate which segment numbers need to be sync'd. At\n> one gigabyte to the bit, I think we could expect the bitmap would\n> not get terribly large. We'd still have a \"cancel\" flag in each\n> hash entry, but it'd apply to the whole relation fork not each\n> segment.\n\nI think this is a good idea.\n\n>> Also, I still wonder if it is worth memorizing fsyncs (under\n>> fsync=off) that may or may not ever take place. Is there any\n>> guarantee that we can make by doing so, that couldn't be made\n>> otherwise?\n>\n> Yeah, you have a point there. It's not real clear that switching fsync\n> from off to on is an operation that we can make any guarantees about,\n> short of executing something like the code recently added to initdb\n> to force-sync the entire PGDATA tree. Perhaps we should change fsync\n> to be PGC_POSTMASTER (ie frozen at postmaster start), and then we could\n> skip forwarding fsync requests when it's off?\n\nI am emphatically opposed to making fsync PGC_POSTMASTER. Being able\nto change parameters on the fly without having to shut down the system\nis important, and we should be looking for ways to make it possible to\nchange more things on-the-fly, not arbitrarily restricting GUCs that\nalready exist. This is certainly one I've changed on the fly, and I'm\nwilling to bet there are real-world users out there who have done the\nsame (e.g. to survive an unexpected load spike).\n\nI would argue that such a change adds no measure of safety, anyway.\nSuppose we have the following sequence of events, starting with\nfsync=off:\n\nT0: write\nT1: checkpoint (fsync of T0 skipped since fsync=off)\nT2: write\nT3: fsync=on\nT4: checkpoint (fsync of T2 performed)\n\nWhy is it OK to fsync the write at T2 but not the one at T0? In order\nfor the system to become crash-safe, the user will need to guarantee,\nat some point following T3, that the entire OS buffer cache has been\nflushed to disk. Whether or not the fsync of T2 happened is\nirrelevant. Had we chosen not to send an fsync request at all at time\nT2, the user's obligations following T3 would be entirely unchanged.\nThus, I see no reason why we need to restrict the fsync setting in\norder to implement the proposed optimization.\n\nBut, at a broader level, I am not very excited about this\noptimization. It seems to me that if this is hurting enough to be\nnoticeable, then it's hurting us when fsync=on as well, and we had\nmaybe think a little harder about how to cut down on the IPC overhead.\n If the bgwriter comm lock is contended, we could partition it - e.g.\nby giving each backend a small queue protected by the backendLock,\nwhich is flushed into the main queue when it fills and harvested by\nthe bgwriter once per checkpoint cycle. (This is the same principle\nas the fast-path locking stuff that we used to eliminate lmgr\ncontention on short read-only queries in 9.2.) If we only fix it for\nthe fsync=off case, then what about people who are running with\nfsync=on but have extremely fast fsyncs? 
Most of us probably don't\nhave the hardware to test that today but it's certainly out there and\nwill probably become more common in the future.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 16 Jul 2012 11:58:59 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Sun, Jul 15, 2012 at 2:29 PM, Tom Lane <[email protected]> wrote:\n>> Yeah, you have a point there. It's not real clear that switching fsync\n>> from off to on is an operation that we can make any guarantees about,\n>> short of executing something like the code recently added to initdb\n>> to force-sync the entire PGDATA tree. Perhaps we should change fsync\n>> to be PGC_POSTMASTER (ie frozen at postmaster start), and then we could\n>> skip forwarding fsync requests when it's off?\n\n> I would argue that such a change adds no measure of safety, anyway.\n\nWell, yes it does, and the reason was explained further down in the\nthread: since we have no particular guarantees as to how quickly\npostmaster children will absorb postgresql.conf updates, there could be\nindividual processes still running with fsync = off long after the user\nthinks he's turned it on. A forced restart solves that. I believe the\nreason for the current coding in the fsync queuing stuff is so that you\nonly have to worry about how long it takes the checkpointer to notice\nthe GUC change, and not any random backend that's running a forty-hour\nquery.\n\n> But, at a broader level, I am not very excited about this\n> optimization. It seems to me that if this is hurting enough to be\n> noticeable, then it's hurting us when fsync=on as well, and we had\n> maybe think a little harder about how to cut down on the IPC overhead.\n\nUh, that's exactly what's under discussion. Not sending useless fsync\nrequests when fsync is off is just one part of it; a part that happens\nto be quite useful for some test scenarios, even if not so much for\nproduction. (IIRC, the original complainant in this thread was running\nfsync off.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Jul 2012 12:08:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On Mon, Jul 16, 2012 at 12:08 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Sun, Jul 15, 2012 at 2:29 PM, Tom Lane <[email protected]> wrote:\n>>> Yeah, you have a point there. It's not real clear that switching fsync\n>>> from off to on is an operation that we can make any guarantees about,\n>>> short of executing something like the code recently added to initdb\n>>> to force-sync the entire PGDATA tree. Perhaps we should change fsync\n>>> to be PGC_POSTMASTER (ie frozen at postmaster start), and then we could\n>>> skip forwarding fsync requests when it's off?\n>\n>> I would argue that such a change adds no measure of safety, anyway.\n>\n> Well, yes it does, and the reason was explained further down in the\n> thread: since we have no particular guarantees as to how quickly\n> postmaster children will absorb postgresql.conf updates, there could be\n> individual processes still running with fsync = off long after the user\n> thinks he's turned it on. A forced restart solves that. I believe the\n> reason for the current coding in the fsync queuing stuff is so that you\n> only have to worry about how long it takes the checkpointer to notice\n> the GUC change, and not any random backend that's running a forty-hour\n> query.\n\nHrmf, I guess that's a fair point. But if we believe that reasoning\nthen I think it's an argument for sending fsync requests even when\nfsync=off, not for making fsync PGC_POSTMASTER. Or maybe we could\nstore the current value of the fsync flag in shared memory somewhere\nand have backends check it before deciding whether to enqueue a\nrequest. With proper use of memory barriers it should be possible to\nmake this work without requiring a lock.\n\n>> But, at a broader level, I am not very excited about this\n>> optimization. It seems to me that if this is hurting enough to be\n>> noticeable, then it's hurting us when fsync=on as well, and we had\n>> maybe think a little harder about how to cut down on the IPC overhead.\n>\n> Uh, that's exactly what's under discussion. Not sending useless fsync\n> requests when fsync is off is just one part of it; a part that happens\n> to be quite useful for some test scenarios, even if not so much for\n> production. (IIRC, the original complainant in this thread was running\n> fsync off.)\n\nMy point is that if sending fsync requests is cheap enough, then not\nsending them won't save anything meaningful. And I don't see why it\ncan't be made just that cheap, thereby benefiting people with fsync=on\nas well.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 16 Jul 2012 12:26:06 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Jul 16, 2012 at 12:08 PM, Tom Lane <[email protected]> wrote:\n>> Uh, that's exactly what's under discussion. Not sending useless fsync\n>> requests when fsync is off is just one part of it; a part that happens\n>> to be quite useful for some test scenarios, even if not so much for\n>> production. (IIRC, the original complainant in this thread was running\n>> fsync off.)\n\n> My point is that if sending fsync requests is cheap enough, then not\n> sending them won't save anything meaningful.\n\nWell, that argument is exactly why the code is designed the way it is...\nbut we are now finding out that sending useless fsync requests isn't as\ncheap as all that.\n\nThe larger point here, in any case, is that I don't believe anyone wants\nto expend a good deal of skull sweat and possibly performance on\nensuring that transitioning from fsync off to fsync on in an active\ndatabase is a reliable operation. It does not seem like something we\nare ever going to recommend, and we have surely got nine hundred ninety\nnine other things that are more useful to spend development time on.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Jul 2012 12:36:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On Mon, Jul 16, 2012 at 12:36 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Mon, Jul 16, 2012 at 12:08 PM, Tom Lane <[email protected]> wrote:\n>>> Uh, that's exactly what's under discussion. Not sending useless fsync\n>>> requests when fsync is off is just one part of it; a part that happens\n>>> to be quite useful for some test scenarios, even if not so much for\n>>> production. (IIRC, the original complainant in this thread was running\n>>> fsync off.)\n>\n>> My point is that if sending fsync requests is cheap enough, then not\n>> sending them won't save anything meaningful.\n>\n> Well, that argument is exactly why the code is designed the way it is...\n> but we are now finding out that sending useless fsync requests isn't as\n> cheap as all that.\n\nI agree, but I think the problem can be solved for a pretty modest\namount of effort without needing to make fsync PGC_POSTMASTER. Your\nproposal to refactor the pendingOpsTable representation seems like it\nwill help a lot. Perhaps you should do that first and then we can\nreassess.\n\n> The larger point here, in any case, is that I don't believe anyone wants\n> to expend a good deal of skull sweat and possibly performance on\n> ensuring that transitioning from fsync off to fsync on in an active\n> database is a reliable operation. It does not seem like something we\n> are ever going to recommend, and we have surely got nine hundred ninety\n> nine other things that are more useful to spend development time on.\n\nWe may not recommend it, but I am sure that people will do it anyway,\nand requiring them to bounce the server in that situation seems\nunfortunate, especially since it will also require them to bounce the\nserver in order to go the other direction.\n\nIn my view, the elephant in the room here is that it's dramatically\ninefficient for every backend to send an fsync request on every block\nwrite. For many users, in many workloads, all of those requests will\nbe for just a tiny handful of relation segments. The fsync queue\ncompaction code works as well as it does for precisely that reason -\nwhen it triggers, we typically can compact a list of thousands or\nmillions of entries down to less than two dozen. In other words, as I\nsee it, the issue here is not so much that 100% of the fsync requests\nare useless when fsync=off, but rather that 99.9% of them are useless\neven when fsync=on.\n\nIn any case, I'm still of the opinion that we ought to try making one\nfix (your proposed refactoring of the pendingOpsTable) and then see\nwhere we're at.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 16 Jul 2012 12:53:02 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> In my view, the elephant in the room here is that it's dramatically\n> inefficient for every backend to send an fsync request on every block\n> write.\n\nYeah. This was better before the decision was taken to separate\nbgwriter from checkpointer; before that, only local communication was\ninvolved for the bulk of write operations (or at least so we hope).\nI remain less than convinced that that split was really a great idea.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Jul 2012 12:57:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On Mon, Jul 16, 2012 at 12:57 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> In my view, the elephant in the room here is that it's dramatically\n>> inefficient for every backend to send an fsync request on every block\n>> write.\n>\n> Yeah. This was better before the decision was taken to separate\n> bgwriter from checkpointer; before that, only local communication was\n> involved for the bulk of write operations (or at least so we hope).\n> I remain less than convinced that that split was really a great idea.\n\nUnfortunately, there are lots of important operations (like bulk\nloading, SELECT * FROM bigtable, and VACUUM notverybigtable) that\ninevitably end up writing out their own dirty buffers. And even when\nthe background writer does write something, it's not always clear that\nthis is a positive thing. Here's Greg Smith commenting on the\nmore-is-worse phenonmenon:\n\nhttp://archives.postgresql.org/pgsql-hackers/2012-02/msg00564.php\n\nJeff Janes and I came up with what I believe to be a plausible\nexplanation for the problem:\n\nhttp://archives.postgresql.org/pgsql-hackers/2012-03/msg00356.php\n\nI kinda think we ought to be looking at fixing that for 9.2, and\nperhaps even back-patching further, but nobody else seemed terribly\nexcited about it.\n\nAt any rate, I'm somewhat less convinced that the split was a good\nidea than I was when we did it, mostly because we haven't really gone\nanywhere with it subsequently. But I do think there's a good argument\nthat any process which is responsible for running a system call that\ncan take >30 seconds to return had better not be responsible for\nanything else that matters very much. If background writing is one of\nthe things we do that doesn't matter very much, then we need to figure\nout what's wrong with it (see above) and make it matter more. If it\nalready matters, then it needs to happen continuously and not get\nsuppressed while other tasks (like long fsyncs) are happening, at\nleast not without some evidence that such suppression is the right\nchoice from a performance standpoint.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 16 Jul 2012 14:39:58 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> Unfortunately, there are lots of important operations (like bulk\n> loading, SELECT * FROM bigtable, and VACUUM notverybigtable) that\n> inevitably end up writing out their own dirty buffers. And even when\n> the background writer does write something, it's not always clear that\n> this is a positive thing. Here's Greg Smith commenting on the\n> more-is-worse phenonmenon:\n\n> http://archives.postgresql.org/pgsql-hackers/2012-02/msg00564.php\n\n> Jeff Janes and I came up with what I believe to be a plausible\n> explanation for the problem:\n\n> http://archives.postgresql.org/pgsql-hackers/2012-03/msg00356.php\n\n> I kinda think we ought to be looking at fixing that for 9.2, and\n> perhaps even back-patching further, but nobody else seemed terribly\n> excited about it.\n\nI'd be fine with back-patching something like that into 9.2 if we had\n(a) a patch and (b) experimental evidence that it made things better.\nUnless I missed something, we have neither. Also, I read the above\ntwo messages to say that you, Greg, and Jeff have three different ideas\nabout exactly what should be done, which is less than comforting for\na last-minute patch...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Jul 2012 15:03:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> At any rate, I'm somewhat less convinced that the split was a good\n> idea than I was when we did it, mostly because we haven't really gone\n> anywhere with it subsequently.\n\nBTW, while we are on the subject: hasn't this split completely broken\nthe statistics about backend-initiated writes? I don't see anything\nin ForwardFsyncRequest that distinguishes whether it's being called in\nthe bgwriter or a regular backend.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Jul 2012 15:18:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On Mon, Jul 16, 2012 at 3:18 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> At any rate, I'm somewhat less convinced that the split was a good\n>> idea than I was when we did it, mostly because we haven't really gone\n>> anywhere with it subsequently.\n>\n> BTW, while we are on the subject: hasn't this split completely broken\n> the statistics about backend-initiated writes?\n\nYes, it seems to have done just that. The comment for\nForwardFsyncRequest is a few bricks short of a load too:\n\n * Whenever a backend is compelled to write directly to a relation\n * (which should be seldom, if the checkpointer is getting its job done),\n * the backend calls this routine to pass over knowledge that the relation\n * is dirty and must be fsync'd before next checkpoint. We also use this\n * opportunity to count such writes for statistical purposes.\n\nLine 2 seems to have been mechanically changed from \"background\nwriter\" to \"checkpointer\", but of course it should still say\n\"background writer\" in this case.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 16 Jul 2012 15:26:38 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> Yes, it seems to have done just that. The comment for\n> ForwardFsyncRequest is a few bricks short of a load too:\n> ...\n> Line 2 seems to have been mechanically changed from \"background\n> writer\" to \"checkpointer\", but of course it should still say\n> \"background writer\" in this case.\n\nYeah, found that one already (it's probably my fault).\n\nWill see about fixing the stats in a separate patch. I just wanted to\nknow if the issue had been dealt with in some non-obvious fashion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Jul 2012 15:46:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Jul 16, 2012 at 3:18 PM, Tom Lane <[email protected]> wrote:\n>> BTW, while we are on the subject: hasn't this split completely broken\n>> the statistics about backend-initiated writes?\n\n> Yes, it seems to have done just that.\n\nSo I went to fix this in the obvious way (attached), but while testing\nit I found that the number of buffers_backend events reported during\na regression test run barely changed; which surprised the heck out of\nme, so I dug deeper. The cause turns out to be extremely scary:\nForwardFsyncRequest isn't getting called at all in the bgwriter process,\nbecause the bgwriter process has a pendingOpsTable. So it just queues\nits fsync requests locally, and then never acts on them, since it never\nruns any checkpoints anymore.\n\nThis implies that nobody has done pull-the-plug testing on either HEAD\nor 9.2 since the checkpointer split went in (2011-11-01), because even\na modicum of such testing would surely have shown that we're failing to\nfsync a significant fraction of our write traffic.\n\nFurthermore, I would say that any performance testing done since then,\nif it wasn't looking at purely read-only scenarios, isn't worth the\nelectrons it's written on. In particular, any performance gain that\nanybody might have attributed to the checkpointer splitup is very\nprobably hogwash.\n\nThis is not giving me a warm feeling about our testing practices.\n\nAs far as fixing the bug is concerned, the reason for the foulup\nis that mdinit() looks to IsBootstrapProcessingMode() to decide\nwhether to create a pendingOpsTable. That probably was all right\nwhen it was coded, but what it means today is that *any* process\nstarted via AuxiliaryProcessMain will have one; thus not only do\nbgwriters have one, but so do walwriter and walreceiver processes;\nwhich might not represent a bug today but it's pretty scary anyway.\nI think we need to fix that so it's more directly dependent on the\nauxiliary process type. We can't use flags set by the respective\nFooMain() functions, such as am_bg_writer, because mdinit is called\nfrom BaseInit() which happens before reaching those functions.\nMy suggestion is that bootstrap.c ought to make the process's\nAuxProcType value available and then mdinit should consult that to\ndecide what to do. (Having done that, we might consider getting rid\nof the \"retail\" process-type flags am_bg_writer etc.)\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 17 Jul 2012 18:56:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Checkpointer split has broken things dramatically (was Re: DELETE vs\n\tTRUNCATE explanation)"
},
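The shape of the mdinit() fix Tom outlines is roughly the following (a hypothetical sketch: it assumes the process's AuxProcType value has been made visible to md.c, which is exactly the part that still needs doing):

    /*
     * Keep a local pendingOpsTable only in a process that will actually
     * absorb and execute fsync requests, instead of keying the decision
     * on IsBootstrapProcessingMode().
     */
    if (!IsUnderPostmaster || MyAuxProcType == CheckpointerProcess)
        pendingOpsTable = hash_create("Pending Ops Table",
                                      100L, &hash_ctl,
                                      HASH_ELEM | HASH_FUNCTION | HASH_CONTEXT);
    else
        pendingOpsTable = NULL;        /* forward requests via ForwardFsyncRequest() */

Bgwriter, walwriter, walreceiver and ordinary backends would then all forward their requests to the checkpointer, and the per-process am_bg_writer-style flags become candidates for removal, as suggested above.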
{
"msg_contents": "On 17 July 2012 23:56, Tom Lane <[email protected]> wrote:\n> This implies that nobody has done pull-the-plug testing on either HEAD\n> or 9.2 since the checkpointer split went in (2011-11-01), because even\n> a modicum of such testing would surely have shown that we're failing to\n> fsync a significant fraction of our write traffic.\n>\n> Furthermore, I would say that any performance testing done since then,\n> if it wasn't looking at purely read-only scenarios, isn't worth the\n> electrons it's written on. In particular, any performance gain that\n> anybody might have attributed to the checkpointer splitup is very\n> probably hogwash.\n>\n> This is not giving me a warm feeling about our testing practices.\n\nThe checkpointer slit-up was not justified as a performance\noptimisation so much as a re-factoring effort that might have some\nconcomitant performance benefits. While I agree that it is regrettable\nthat this was allowed to go undetected for so long, I do not find it\nespecially surprising that some performance testing results post-split\ndidn't strike somebody as fool's gold. Much of the theory surrounding\ncheckpoint tuning, if followed, results in relatively little work\nbeing done during the sync phase of a checkpoint, especially if an I/O\nscheduler like deadline is used.\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n",
"msg_date": "Wed, 18 Jul 2012 00:48:50 +0100",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was\n\tRe: DELETE vs TRUNCATE explanation)"
},
{
"msg_contents": "On 07/18/2012 06:56 AM, Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n>> On Mon, Jul 16, 2012 at 3:18 PM, Tom Lane <[email protected]> wrote:\n>>> BTW, while we are on the subject: hasn't this split completely broken\n>>> the statistics about backend-initiated writes?\n>> Yes, it seems to have done just that.\n> So I went to fix this in the obvious way (attached), but while testing\n> it I found that the number of buffers_backend events reported during\n> a regression test run barely changed; which surprised the heck out of\n> me, so I dug deeper. The cause turns out to be extremely scary:\n> ForwardFsyncRequest isn't getting called at all in the bgwriter process,\n> because the bgwriter process has a pendingOpsTable. So it just queues\n> its fsync requests locally, and then never acts on them, since it never\n> runs any checkpoints anymore.\n>\n> This implies that nobody has done pull-the-plug testing on either HEAD\n> or 9.2 since the checkpointer split went in (2011-11-01)\n\nThat makes me wonder if on top of the buildfarm, extending some \nbuildfarm machines into a \"crashfarm\" is needed:\n\n- Keep kvm instances with copy-on-write snapshot disks and the build env \non them\n- Fire up the VM, do a build, and start the server\n- From outside the vm have the test controller connect to the server and \nstart a test run\n- Hard-kill the OS instance at a random point in time.\n- Start the OS instance back up\n- Start Pg back up and connect to it again\n- From the test controller, test the Pg install for possible corruption \nby reading the indexes and tables, doing some test UPDATEs, etc.\n\nThe main challenge would be coming up with suitable tests to run, ones \nthat could then be checked to make sure nothing was broken. The test \ncontroller would know how far a test got before the OS got killed and \nwould know which test it was running, so it'd be able to check for \nexpected data if provided with appropriate test metadata. Use of enable_ \nflags should permit scans of indexes and table heaps to be forced.\n\nWhat else should be checked? The main thing that comes to mind for me is \nsomething I've worried about for a while: that Pg might not always \nhandle out-of-disk-space anywhere near as gracefully as it's often \nclaimed to. There's no automated testing for that, so it's hard to \nreally know. A harnessed VM could be used to test that. Instead of \nvirtual plug pull tests it could generate a virtual disk of constrained \nrandom size, run its tests until out-of-disk caused failure, stop Pg, \nexpand the disk, restart Pg, and run its checks.\n\nVariants where WAL was on a separate disk and only WAL or only the main \nnon-WAL disk run out of space would also make sense and be easy to \nproduce with such a harness.\n\nI've written some automated kvm test harnesses, so I could have a play \nwith this idea. I would probably need some help with the test design, \nthough, and the guest OS would be Linux, Linux, or Linux at least to \nstart with.\n\nOpinions?\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 18 Jul 2012 08:13:19 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was Re: DELETE\n\tvs TRUNCATE explanation)"
},
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> On 07/18/2012 06:56 AM, Tom Lane wrote:\n>> This implies that nobody has done pull-the-plug testing on either HEAD\n>> or 9.2 since the checkpointer split went in (2011-11-01)\n\n> That makes me wonder if on top of the buildfarm, extending some \n> buildfarm machines into a \"crashfarm\" is needed:\n\nNot sure if we need a whole \"farm\", but certainly having at least one\nmachine testing this sort of stuff on a regular basis would make me feel\na lot better.\n\n> The main challenge would be coming up with suitable tests to run, ones \n> that could then be checked to make sure nothing was broken.\n\nOne fairly simple test scenario could go like this:\n\n\t* run the regression tests\n\t* pg_dump the regression database\n\t* run the regression tests again\n\t* hard-kill immediately upon completion\n\t* restart database, allow it to perform recovery\n\t* pg_dump the regression database\n\t* diff previous and new dumps; should be the same\n\nThe main thing this wouldn't cover is discrepancies in user indexes,\nsince pg_dump doesn't do anything that's likely to result in indexscans\non user tables. It ought to be enough to detect the sort of system-wide\nproblem we're talking about here, though.\n\nIn general I think the hard part is automated reproduction of an\nOS-crash scenario, but your ideas about how to do that sound promising.\nOnce we have that going, it shouldn't be hard to come up with tests\nof the form \"do X, hard-crash, recover, check X still looks sane\".\n\n> What else should be checked? The main thing that comes to mind for me is \n> something I've worried about for a while: that Pg might not always \n> handle out-of-disk-space anywhere near as gracefully as it's often \n> claimed to.\n\n+1\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Jul 2012 20:31:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Checkpointer split has broken things dramatically (was Re:\n\tDELETE vs TRUNCATE explanation)"
},
{
"msg_contents": "On 07/16/2012 02:39 PM, Robert Haas wrote:\n> Unfortunately, there are lots of important operations (like bulk\n> loading, SELECT * FROM bigtable, and VACUUM notverybigtable) that\n> inevitably end up writing out their own dirty buffers. And even when\n> the background writer does write something, it's not always clear that\n> this is a positive thing. Here's Greg Smith commenting on the\n> more-is-worse phenonmenon:\n>\n> http://archives.postgresql.org/pgsql-hackers/2012-02/msg00564.php\n\nYou can add \"crash recovery\" to the list of things where the interaction \nwith the OS write cache matters a lot too, something I just took a \nbeating and learned from recently. Since the recovery process is \nessentially one giant unified backend, how effectively the background \nwriter and/or checkpointer move writes from recovery to themselves is \nreally important. It's a bit easier to characterize than a complicated \nmixed set of clients, which has given me a couple of ideas to chase down.\n\nWhat I've been doing for much of the last month (instead of my original \nplan of reviewing patches) is moving toward the bottom of characterizing \nthat under high pressure. It provides an even easier way to compare \nmultiple write strategies at the OS level than regular pgbench-like \nbenchmarks. Recovery playback with a different tuning becomes as simple \nas rolling back to a simple base backup and replaying all the WAL, \npossibly including some number of bulk operations that showed up. You \ncan measure that speed instead of transaction-level throughput. I'm \nseeing the same ~100% difference in performance between various Linux \ntunings on recovery as I was getting on VACUUM tests, and it's a whole \nlot easier to setup and (ahem) replicate the results. I'm putting \ntogether a playback time benchmark based on this observation.\n\nThe fact that I have servers all over the place now with >64GB worth of \nRAM has turned the topic of how much dirty memory should be used for \nwrite caching into a hot item for me again in general too. If I live \nthrough 9.3 development, I expect to have a lot more ideas about how to \ndeal with this whole area play out in the upcoming months. I could \nreally use a cool day to sit outside thinking about it right now.\n\n> Jeff Janes and I came up with what I believe to be a plausible\n> explanation for the problem:\n>\n> http://archives.postgresql.org/pgsql-hackers/2012-03/msg00356.php\n>\n> I kinda think we ought to be looking at fixing that for 9.2, and\n> perhaps even back-patching further, but nobody else seemed terribly\n> excited about it.\n\nFYI, I never rejected any of that thinking, I just haven't chewed on \nwhat you two were proposing. If that's still something you think should \nbe revisited for 9.2, I'll take a longer look at it. My feeling on this \nso far has really been that the write blocking issues are much larger \nthan the exact logic used by the background writer during the code you \nwere highlighting, which I always saw as more active/important during \nidle periods. This whole area needs to get a complete overhaul during \n9.3 though, especially since there are plenty of people who want to fit \nchecksum writes into that path too.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n",
"msg_date": "Tue, 17 Jul 2012 23:22:22 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
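[Editor's illustration] For anyone wanting to reproduce the kind of recovery-playback measurement Greg describes above, a minimal driver could look like the Python sketch below. It is an illustration only: the paths, the pre-existing base backup (assumed to already contain the WAL needed for replay, e.g. taken with pg_basebackup -Ft -x), and the idea of timing "first successful connection" as the end of replay are assumptions, not details taken from this thread.

import shutil, subprocess, tarfile, time

PGDATA = "/tmp/replaytest/pgdata"            # hypothetical scratch location
BASE_BACKUP = "/tmp/replaytest/base.tar"     # e.g. produced earlier by pg_basebackup -Ft -x

def restore_base_backup():
    # Start every run from the same on-disk state so only the OS/WAL-replay tuning varies.
    shutil.rmtree(PGDATA, ignore_errors=True)
    with tarfile.open(BASE_BACKUP) as tar:
        tar.extractall(PGDATA)
    subprocess.check_call(["chmod", "0700", PGDATA])

def server_accepts_connections():
    return subprocess.call(["psql", "-Atc", "SELECT 1", "postgres"]) == 0

restore_base_backup()
started = time.time()
# Replay begins as soon as the postmaster starts; -w/-t make pg_ctl wait for completion.
subprocess.check_call(["pg_ctl", "-D", PGDATA, "-w", "-t", "3600", "start"])
while not server_accepts_connections():
    time.sleep(1)
print("recovery/replay took %.1f seconds" % (time.time() - started))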
{
"msg_contents": "On 07/17/2012 06:56 PM, Tom Lane wrote:\n> So I went to fix this in the obvious way (attached), but while testing\n> it I found that the number of buffers_backend events reported during\n> a regression test run barely changed; which surprised the heck out of\n> me, so I dug deeper. The cause turns out to be extremely scary:\n> ForwardFsyncRequest isn't getting called at all in the bgwriter process,\n> because the bgwriter process has a pendingOpsTable.\n\nWhen I did my testing early this year to look at checkpointer \nperformance (among other 9.2 write changes like group commit), I did see \nsome cases where buffers_backend was dramatically different on 9.2 vs. \n9.1 There were plenty of cases where the totals across a 10 minute \npgbench were almost identical though, so this issue didn't stick out \nthen. That's a very different workload than the regression tests though.\n\n> This implies that nobody has done pull-the-plug testing on either HEAD\n> or 9.2 since the checkpointer split went in (2011-11-01), because even\n> a modicum of such testing would surely have shown that we're failing to\n> fsync a significant fraction of our write traffic.\n\nUgh. Most of my pull the plug testing the last six months has been \nfocused on SSD tests with older versions. I want to duplicate this (and \nany potential fix) now that you've highlighted it.\n\n> Furthermore, I would say that any performance testing done since then,\n> if it wasn't looking at purely read-only scenarios, isn't worth the\n> electrons it's written on. In particular, any performance gain that\n> anybody might have attributed to the checkpointer splitup is very\n> probably hogwash.\n\nThere hasn't been any performance testing that suggested the \ncheckpointer splitup was justified. The stuff I did showed it being \nflat out negative for a subset of pgbench oriented cases, which didn't \nseem real-world enough to disprove it as the right thing to do though.\n\nI thought there were two valid justifications for the checkpointer split \n(which is not a feature I have any corporate attachment to--I'm as \nisolated from how it was developed as you are). The first is that it \nseems like the right architecture to allow reworking checkpoints and \nbackground writes for future write path optimization. A good chunk of \nthe time when I've tried to improve one of those (like my spread sync \nstuff from last year), the code was complicated by the background writer \nneeding to follow the drum of checkpoint timing, and vice-versa. Being \nable to hack on those independently got a sign of relief from me. And \nwhile this adds some code duplication in things like the process setup, \nI thought the result would be cleaner for people reading the code to \nfollow too. This problem is terrible, but I think part of how it crept \nin is that the single checkpoint+background writer process was doing way \ntoo many things to even follow all of them some days.\n\nThe second justification for the split was that it seems easier to get a \nlow power result from, which I believe was the angle Peter Geoghegan was \nworking when this popped up originally. The checkpointer has to run \nsometimes, but only at a 50% duty cycle as it's tuned out of the box. \nIt seems nice to be able to approach that in a way that's power \nefficient without coupling it to whatever heartbeat the BGW is running \nat. I could even see people changing the frequencies for each \nindependently depending on expected system load. 
Tune for lower power \nwhen you don't expect many users, that sort of thing.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n",
"msg_date": "Wed, 18 Jul 2012 00:00:08 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was\n\tRe: DELETE vs TRUNCATE explanation)"
},
{
"msg_contents": "On 07/18/2012 12:00 PM, Greg Smith wrote:\n\n> The second justification for the split was that it seems easier to get \n> a low power result from, which I believe was the angle Peter Geoghegan \n> was working when this popped up originally. The checkpointer has to \n> run sometimes, but only at a 50% duty cycle as it's tuned out of the \n> box. It seems nice to be able to approach that in a way that's power \n> efficient without coupling it to whatever heartbeat the BGW is running \n> at. I could even see people changing the frequencies for each \n> independently depending on expected system load. Tune for lower power \n> when you don't expect many users, that sort of thing.\n>\nYeah - I'm already seeing benefits from that on my laptop, with much \nless need to stop Pg when I'm not using it.\n\n--\nCraig Ringer\n\n\n",
"msg_date": "Wed, 18 Jul 2012 12:20:39 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was\n\tRe: DELETE vs TRUNCATE explanation)"
},
{
"msg_contents": "On 07/18/2012 08:31 AM, Tom Lane wrote:\n> Not sure if we need a whole \"farm\", but certainly having at least one\n> machine testing this sort of stuff on a regular basis would make me feel\n> a lot better.\n\nOK. That's something I can actually be useful for.\n\nMy current qemu/kvm test harness control code is in Python since that's \nwhat all the other tooling for the project I was using it for is in. Is \nit likely to be useful for me to adapt that code for use for a Pg \ncrash-test harness, or will you need a particular tool/language to be \nused? If so, which/what? I'll do pretty much anything except Perl. I'll \nhave a result for you more quickly working in Python, though I'm happy \nenough to write it in C (or Java, but I'm guessing that won't get any \nenthusiasm around here).\n\n> One fairly simple test scenario could go like this:\n>\n> \t* run the regression tests\n> \t* pg_dump the regression database\n> \t* run the regression tests again\n> \t* hard-kill immediately upon completion\n> \t* restart database, allow it to perform recovery\n> \t* pg_dump the regression database\n> \t* diff previous and new dumps; should be the same\n>\n> The main thing this wouldn't cover is discrepancies in user indexes,\n> since pg_dump doesn't do anything that's likely to result in indexscans\n> on user tables. It ought to be enough to detect the sort of system-wide\n> problem we're talking about here, though.\n\nIt also won't detect issues that only occur during certain points in \nexecution, under concurrent load, etc. Still, a start, and I could look \nat extending it into some kind of \"crash fuzzing\" once the basics were \nworking.\n\n> In general I think the hard part is automated reproduction of an\n> OS-crash scenario, but your ideas about how to do that sound promising.\n\nIt's worked well for other testing I've done. Any writes that're still \nin the guest OS's memory, write queues, etc are lost when kvm is killed, \njust like a hard crash. Anything the kvm guest has flushed to \"disk\" is \non the host and preserved - either on the host's disks \n(cache=writethrough) or at least in dirty writeback buffers in ram \n(cache=writeback).\n\nkvm can even do a decent job of simulating a BBU-equipped write-through \nvolume by allowing the host OS to do write-back caching of KVM's backing \ndevice/files. You don't get to set a max write-back cache size directly, \nbut Linux I/O writeback settings provide some control.\n\nMy favourite thing about kvm is that it's just another command. It can \nbe run headless and controlled via virtual serial console and/or its \nmonitor socket. It doesn't require special privileges and can operate on \nordinary files. It's very well suited for hooking into test harnesses.\n\nThe only challenge with using kvm/qemu is that there have been some \nbreaking changes and a couple of annoying bugs that mean I won't be able \nto support anything except pretty much the latest versions initially. \nkvm is easy to compile and has limited dependencies, so I don't expect \nthat to be an issue, but thought it was worth raising.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 18 Jul 2012 12:57:53 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: Checkpointer split has broken things dramatically\n\t(was Re: DELETE vs TRUNCATE explanation)"
},
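[Editor's illustration] For concreteness, here is a rough Python rendering of the dump/crash/recover/diff cycle Tom outlines above. It assumes a disposable cluster in PGDATA and a PostgreSQL source tree for "make installcheck"; "pg_ctl stop -m immediate" is only a weak stand-in for pulling the plug (killing a kvm guest, as Craig describes, also exercises the OS write cache). All paths are placeholders.

import subprocess

PGDATA = "/tmp/crashtest/pgdata"
SRCDIR = "/path/to/postgresql"          # hypothetical source tree location

def run(*cmd):
    subprocess.check_call(list(cmd))

run("make", "-C", SRCDIR, "installcheck")               # populate the regression database
run("pg_dump", "-f", "/tmp/before.sql", "regression")
run("make", "-C", SRCDIR, "installcheck")               # generate another round of write traffic
run("pg_ctl", "-D", PGDATA, "stop", "-m", "immediate")  # simulated crash right after completion
run("pg_ctl", "-D", PGDATA, "-w", "-t", "600", "start") # crash recovery happens during startup
run("pg_dump", "-f", "/tmp/after.sql", "regression")
run("diff", "-u", "/tmp/before.sql", "/tmp/after.sql")  # non-zero exit (exception) => dumps differ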
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> On 07/17/2012 06:56 PM, Tom Lane wrote:\n>> Furthermore, I would say that any performance testing done since then,\n>> if it wasn't looking at purely read-only scenarios, isn't worth the\n>> electrons it's written on. In particular, any performance gain that\n>> anybody might have attributed to the checkpointer splitup is very\n>> probably hogwash.\n\n> There hasn't been any performance testing that suggested the \n> checkpointer splitup was justified. The stuff I did showed it being \n> flat out negative for a subset of pgbench oriented cases, which didn't \n> seem real-world enough to disprove it as the right thing to do though.\n\nJust to clarify, I'm not saying that this means we should revert the\ncheckpointer split. What I *am* worried about is that we may have been\nhacking other things on the basis of faulty performance tests.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2012 01:56:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was Re: DELETE\n\tvs TRUNCATE explanation)"
},
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> On 07/18/2012 08:31 AM, Tom Lane wrote:\n>> Not sure if we need a whole \"farm\", but certainly having at least one\n>> machine testing this sort of stuff on a regular basis would make me feel\n>> a lot better.\n\n> OK. That's something I can actually be useful for.\n\n> My current qemu/kvm test harness control code is in Python since that's \n> what all the other tooling for the project I was using it for is in. Is \n> it likely to be useful for me to adapt that code for use for a Pg \n> crash-test harness, or will you need a particular tool/language to be \n> used? If so, which/what? I'll do pretty much anything except Perl. I'll \n> have a result for you more quickly working in Python, though I'm happy \n> enough to write it in C (or Java, but I'm guessing that won't get any \n> enthusiasm around here).\n\nIf we were talking about code that was going to end up in the PG\ndistribution, I'd kind of want it to be in C or Perl, just to keep down\nthe number of languages we're depending on. However, it's not obvious\nthat a tool like this would ever go into our distribution. I'd suggest\nworking with what you're comfortable with, and we can worry about\ntranslation when and if there's a reason to.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2012 02:00:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Checkpointer split has broken things dramatically (was Re:\n\tDELETE vs TRUNCATE explanation)"
},
{
"msg_contents": "On 18.07.2012 02:48, Peter Geoghegan wrote:\n> On 17 July 2012 23:56, Tom Lane<[email protected]> wrote:\n>> This implies that nobody has done pull-the-plug testing on either HEAD\n>> or 9.2 since the checkpointer split went in (2011-11-01), because even\n>> a modicum of such testing would surely have shown that we're failing to\n>> fsync a significant fraction of our write traffic.\n>>\n>> Furthermore, I would say that any performance testing done since then,\n>> if it wasn't looking at purely read-only scenarios, isn't worth the\n>> electrons it's written on. In particular, any performance gain that\n>> anybody might have attributed to the checkpointer splitup is very\n>> probably hogwash.\n>>\n>> This is not giving me a warm feeling about our testing practices.\n>\n> The checkpointer slit-up was not justified as a performance\n> optimisation so much as a re-factoring effort that might have some\n> concomitant performance benefits.\n\nAgreed, but it means that we need to re-run the tests that were done to \nmake sure the extra fsync-request traffic is not causing a performance \nregression, \nhttp://archives.postgresql.org/pgsql-hackers/2011-10/msg01321.php.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 18 Jul 2012 10:30:40 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was\n\tRe: DELETE vs TRUNCATE explanation)"
},
{
"msg_contents": "On Tue, Jul 17, 2012 at 6:56 PM, Tom Lane <[email protected]> wrote:\n> So I went to fix this in the obvious way (attached), but while testing\n> it I found that the number of buffers_backend events reported during\n> a regression test run barely changed; which surprised the heck out of\n> me, so I dug deeper. The cause turns out to be extremely scary:\n> ForwardFsyncRequest isn't getting called at all in the bgwriter process,\n> because the bgwriter process has a pendingOpsTable. So it just queues\n> its fsync requests locally, and then never acts on them, since it never\n> runs any checkpoints anymore.\n\n:-(\n\n> This implies that nobody has done pull-the-plug testing on either HEAD\n> or 9.2 since the checkpointer split went in (2011-11-01), because even\n> a modicum of such testing would surely have shown that we're failing to\n> fsync a significant fraction of our write traffic.\n>\n> Furthermore, I would say that any performance testing done since then,\n> if it wasn't looking at purely read-only scenarios, isn't worth the\n> electrons it's written on. In particular, any performance gain that\n> anybody might have attributed to the checkpointer splitup is very\n> probably hogwash.\n\nI don't think anybody thought that was going to result in a direct\nperformance gain, but I agree the performance testing needs to be\nredone. I suspect that the impact on my testing is limited, because I\ndo mostly pgbench testing, and the lost fsync requests were probably\nduplicated by non-lost fsync requests from backend writes. But I\nagree that it needs to be redone once this is fixed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 18 Jul 2012 08:26:02 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was Re: DELETE\n\tvs TRUNCATE explanation)"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Jul 16, 2012 at 12:36 PM, Tom Lane <[email protected]> wrote:\n>> Well, that argument is exactly why the code is designed the way it is...\n>> but we are now finding out that sending useless fsync requests isn't as\n>> cheap as all that.\n\n> I agree, but I think the problem can be solved for a pretty modest\n> amount of effort without needing to make fsync PGC_POSTMASTER. Your\n> proposal to refactor the pendingOpsTable representation seems like it\n> will help a lot. Perhaps you should do that first and then we can\n> reassess.\n> ...\n> In my view, the elephant in the room here is that it's dramatically\n> inefficient for every backend to send an fsync request on every block\n> write. For many users, in many workloads, all of those requests will\n> be for just a tiny handful of relation segments. The fsync queue\n> compaction code works as well as it does for precisely that reason -\n> when it triggers, we typically can compact a list of thousands or\n> millions of entries down to less than two dozen. In other words, as I\n> see it, the issue here is not so much that 100% of the fsync requests\n> are useless when fsync=off, but rather that 99.9% of them are useless\n> even when fsync=on.\n\n> In any case, I'm still of the opinion that we ought to try making one\n> fix (your proposed refactoring of the pendingOpsTable) and then see\n> where we're at.\n\nI've been chewing on this issue some more, and no longer like my\nprevious proposal, which was\n\n>>> ... What I'm thinking about\n>>> is reducing the hash key to just RelFileNodeBackend + ForkNumber,\n>>> so that there's one hashtable entry per fork, and then storing a\n>>> bitmap to indicate which segment numbers need to be sync'd. At\n>>> one gigabyte to the bit, I think we could expect the bitmap would\n>>> not get terribly large. We'd still have a \"cancel\" flag in each\n>>> hash entry, but it'd apply to the whole relation fork not each\n>>> segment.\n\nThe reason that's not so attractive is the later observation that what\nwe really care about optimizing is FORGET_RELATION_FSYNC for all the\nforks of a relation at once, which we could produce just one request\nfor with trivial refactoring of smgrunlink/mdunlink. The above\nrepresentation doesn't help for that. So what I'm now thinking is that\nwe should create a second hash table, with key RelFileNode only,\ncarrying two booleans: a cancel-previous-fsyncs bool and a\nplease-unlink-after-checkpoint bool. (The latter field would allow us\nto drop the separate pending-unlinks data structure.) Entries would\nbe made in this table when we got a FORGET_RELATION_FSYNC or\nUNLINK_RELATION_REQUEST message -- note that in 99% of cases we'd get\nboth message types for each relation, since they're both created during\nDROP. (Maybe we could even combine these request types.) To use the\ntable, as we scan the existing per-fork-and-segment hash table, we'd\nhave to do a lookup in the per-relation table to see if there was a\nlater cancel message for that relation. Now this does add a few cycles\nto the processing of each pendingOpsTable entry in mdsync ... but\nconsidering that the major work in that loop is an fsync call, it is\ntough to believe that anybody would notice an extra hashtable lookup.\n\nHowever, I also came up with an entirely different line of thought,\nwhich unfortunately seems incompatible with either of the improved\ntable designs above. 
It is this: instead of having a request queue\nthat feeds into a hash table hidden within the checkpointer process,\nwhat about storing the pending-fsyncs table as a shared hash table\nin shared memory? That is, ForwardFsyncRequest would not simply\ntry to add the request to a linear array, but would do a HASH_ENTER\ncall on a shared hash table. This means the de-duplication occurs\nfor free and we no longer need CompactCheckpointerRequestQueue at all.\nBasically, this would amount to saying that the original design was\nwrong to try to micro-optimize the time spent in ForwardFsyncRequest,\nand that we'd rather pay a little more per ForwardFsyncRequest call\nto avoid the enormous response-time spike that will occur when\nCompactCheckpointerRequestQueue has to run. (Not to mention that\nthe checkpointer would eventually have to do HASH_ENTER anyway.)\nI think this would address your observation above that the request\nqueue tends to contain an awful lot of duplicates.\n\nBut I only see how to make that work with the existing hash table\nstructure, because with either of the other table designs, it's\ndifficult to set a predetermined limit on the amount of shared\nmemory needed. The segment-number bitmaps could grow uncomfortably\nlarge in the first design, while in the second there's no good way\nto know how large the per-relation table has to be to cover a given\nsize for the per-fork-and-segment table. (The sore spot here is that\nonce we've accepted a per-fork entry, failing to record a relation-level\ncancel for it is not an option, so we can't just return failure.)\n\nSo if we go that way it seems like we still have the problem of\nhaving to do hash_seq_search to implement a cancel. We could\npossibly arrange for that to be done under shared rather than\nexclusive lock of the hash table, but nonetheless it's not\nreally fixing the originally complained-of O(N^2) problem.\n\nAnother issue, which might be fatal to the whole thing, is that\nit's not clear that a shared hash table similar in size to the\nexisting request array is big enough. The entries basically need\nto live for about one checkpoint cycle, and with a slow cycle\nyou could need an arbitrarily large number of them.\n\nA variant that might work a little better is to keep the main\nrequest table still in checkpointer private memory, but to have\n*both* a small hash table and a request queue in shared memory.\nThe idea is that you first try to enter your request in the hash\ntable; if successful, done (and de-duping has happened automatically).\nIf no room left in the hash table, add it to the request queue as\nnormal. The checkpointer periodically empties both the hash table\nand the queue. The hash table probably doesn't have to be too huge\nto be effective at de-duping requests ... but having said that,\nI have no idea exactly how to size it.\n\nSo that's a brain dump of some half baked ideas. Thoughts anyone?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2012 17:17:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
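[Editor's illustration] To make the last variant above easier to follow, here is a toy Python model of a small de-duplicating table sitting in front of the ordinary request queue. It is only a sketch of the idea; the real thing would live in shared memory, be written in C, and be protected by CheckpointerCommLock. The names (FsyncTag, RequestBuffer) are made up for illustration.

from collections import namedtuple

FsyncTag = namedtuple("FsyncTag", ["rnode", "forknum", "segno"])

class RequestBuffer:
    def __init__(self, dedup_slots=64):
        self.dedup = set()            # stands in for the small shared hash table
        self.queue = []               # stands in for the existing request array
        self.dedup_slots = dedup_slots

    def forward_fsync_request(self, tag):
        if tag in self.dedup:
            return                    # duplicate absorbed for free
        if len(self.dedup) < self.dedup_slots:
            self.dedup.add(tag)       # room in the hash: de-duped automatically
        else:
            self.queue.append(tag)    # hash full: fall back to the plain queue

    def absorb_all(self):
        # What the checkpointer would do periodically: drain both structures.
        pending = set(self.dedup) | set(self.queue)
        self.dedup.clear()
        self.queue.clear()
        return pending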
{
"msg_contents": "On 17 July 2012 23:56, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Mon, Jul 16, 2012 at 3:18 PM, Tom Lane <[email protected]> wrote:\n>>> BTW, while we are on the subject: hasn't this split completely broken\n>>> the statistics about backend-initiated writes?\n>\n>> Yes, it seems to have done just that.\n>\n> So I went to fix this in the obvious way (attached), but while testing\n> it I found that the number of buffers_backend events reported during\n> a regression test run barely changed; which surprised the heck out of\n> me, so I dug deeper. The cause turns out to be extremely scary:\n> ForwardFsyncRequest isn't getting called at all in the bgwriter process,\n> because the bgwriter process has a pendingOpsTable. So it just queues\n> its fsync requests locally, and then never acts on them, since it never\n> runs any checkpoints anymore.\n>\n> This implies that nobody has done pull-the-plug testing on either HEAD\n> or 9.2 since the checkpointer split went in (2011-11-01), because even\n> a modicum of such testing would surely have shown that we're failing to\n> fsync a significant fraction of our write traffic.\n\nThat problem was reported to me on list some time ago, and I made note\nto fix that after last CF.\n\nI added a note to 9.2 open items about it myself, but it appears my\nfix was too simple and fixed only the reported problem not the\nunderlying issue. Reading your patch gave me strong deja vu, so not\nsure what happened there.\n\nNot very good from me. Feel free to thwack me to fix such things if I\nseem not to respond quickly enough.\n\nI'm now looking at the other open items in my area.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n",
"msg_date": "Wed, 18 Jul 2012 22:45:08 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was\n\tRe: DELETE vs TRUNCATE explanation)"
},
{
"msg_contents": "On Wed, Jul 18, 2012 at 5:17 PM, Tom Lane <[email protected]> wrote:\n> I've been chewing on this issue some more, and no longer like my\n> previous proposal, which was\n>\n>>>> ... What I'm thinking about\n>>>> is reducing the hash key to just RelFileNodeBackend + ForkNumber,\n>>>> so that there's one hashtable entry per fork, and then storing a\n>>>> bitmap to indicate which segment numbers need to be sync'd. At\n>>>> one gigabyte to the bit, I think we could expect the bitmap would\n>>>> not get terribly large. We'd still have a \"cancel\" flag in each\n>>>> hash entry, but it'd apply to the whole relation fork not each\n>>>> segment.\n>\n> The reason that's not so attractive is the later observation that what\n> we really care about optimizing is FORGET_RELATION_FSYNC for all the\n> forks of a relation at once, which we could produce just one request\n> for with trivial refactoring of smgrunlink/mdunlink. The above\n> representation doesn't help for that. So what I'm now thinking is that\n> we should create a second hash table, with key RelFileNode only,\n> carrying two booleans: a cancel-previous-fsyncs bool and a\n> please-unlink-after-checkpoint bool. (The latter field would allow us\n> to drop the separate pending-unlinks data structure.) Entries would\n> be made in this table when we got a FORGET_RELATION_FSYNC or\n> UNLINK_RELATION_REQUEST message -- note that in 99% of cases we'd get\n> both message types for each relation, since they're both created during\n> DROP. (Maybe we could even combine these request types.) To use the\n> table, as we scan the existing per-fork-and-segment hash table, we'd\n> have to do a lookup in the per-relation table to see if there was a\n> later cancel message for that relation. Now this does add a few cycles\n> to the processing of each pendingOpsTable entry in mdsync ... but\n> considering that the major work in that loop is an fsync call, it is\n> tough to believe that anybody would notice an extra hashtable lookup.\n\nSeems a bit complex, but it might be worth it. Keep in mind that I\neventually want to be able to make an unlogged table logged or a visca\nversa, which will probably entail unlinking just the init fork (for\nthe logged -> unlogged direction).\n\n> However, I also came up with an entirely different line of thought,\n> which unfortunately seems incompatible with either of the improved\n> table designs above. It is this: instead of having a request queue\n> that feeds into a hash table hidden within the checkpointer process,\n> what about storing the pending-fsyncs table as a shared hash table\n> in shared memory? That is, ForwardFsyncRequest would not simply\n> try to add the request to a linear array, but would do a HASH_ENTER\n> call on a shared hash table. This means the de-duplication occurs\n> for free and we no longer need CompactCheckpointerRequestQueue at all.\n> Basically, this would amount to saying that the original design was\n> wrong to try to micro-optimize the time spent in ForwardFsyncRequest,\n> and that we'd rather pay a little more per ForwardFsyncRequest call\n> to avoid the enormous response-time spike that will occur when\n> CompactCheckpointerRequestQueue has to run. 
(Not to mention that\n> the checkpointer would eventually have to do HASH_ENTER anyway.)\n> I think this would address your observation above that the request\n> queue tends to contain an awful lot of duplicates.\n\nI'm not concerned about the queue *containing* a large number of\nduplicates; I'm concerned about the large number of duplicate\n*requests*. Under either the current system or this proposal, every\ntime we write a block, we must take and release CheckpointerCommLock.\nNow, I have no evidence that there's actually a bottleneck there, but\nif there is, this proposal won't fix it. In fact, I suspect on the\nwhole it would make things worse, because while it's true that\nCompactCheckpointerRequestQueue is expensive, it shouldn't normally be\nhappening at all, because the checkpointer should be draining the\nqueue regularly enough to prevent it from filling. So except when the\nsystem is in the pathological state where the checkpointer becomes\nunresponsive because it's blocked in-kernel on a very long fsync and\nthere is a large amount of simultaneous write activity, each process\nthat acquires CheckpointerCommLock holds it for just long enough to\nslam a few bytes of data into the queue, which is very cheap. I\nsuspect that updating a hash table would be significantly more\nexpensive, and we'd pay whatever that extra overhead is on every fsync\nrequest, not just in the unusual case where we manage to fill the\nqueue. So I don't think this is likely to be a win.\n\nIf you think about the case of an UPDATE statement that hits a large\nnumber of blocks in the same relation, it sends an fsync request for\nevery single block. Really, it's only necessary to send a new fsync\nrequest if the checkpointer has begun a new checkpoint cycle in the\nmeantime; otherwise, the old request is still pending and will cover\nthe new write as well. But there's no way for the backend doing the\nwrites to know whether that's happened, so it just sends a request\nevery time. That's not necessarily a problem, because, again, I have\nno evidence whatsoever that CheckpointerCommLock is contented, or that\nthe overhead of sending those requests is significant. But if it is\nthen we need a solution that does not require acquisition of a\nsystem-wide lwlock on every block write.\n\n> But I only see how to make that work with the existing hash table\n> structure, because with either of the other table designs, it's\n> difficult to set a predetermined limit on the amount of shared\n> memory needed. The segment-number bitmaps could grow uncomfortably\n> large in the first design, while in the second there's no good way\n> to know how large the per-relation table has to be to cover a given\n> size for the per-fork-and-segment table. (The sore spot here is that\n> once we've accepted a per-fork entry, failing to record a relation-level\n> cancel for it is not an option, so we can't just return failure.)\n\nMoreover, even if it were technically an option, we know from\nexperience that failure to absorb fsync requests has disastrous\nperformance consequences.\n\n> So if we go that way it seems like we still have the problem of\n> having to do hash_seq_search to implement a cancel. We could\n> possibly arrange for that to be done under shared rather than\n> exclusive lock of the hash table, but nonetheless it's not\n> really fixing the originally complained-of O(N^2) problem.\n\nYep. 
In fact it's making it worse, because AIUI the existing\nhash_seq_search calls are happening in backend-private memory while\nholding no lock. Doing it on a shared-memory hash table while holding\na high-traffic LWLock figures to be much worse.\n\n> Another issue, which might be fatal to the whole thing, is that\n> it's not clear that a shared hash table similar in size to the\n> existing request array is big enough. The entries basically need\n> to live for about one checkpoint cycle, and with a slow cycle\n> you could need an arbitrarily large number of them.\n\nYep.\n\n> A variant that might work a little better is to keep the main\n> request table still in checkpointer private memory, but to have\n> *both* a small hash table and a request queue in shared memory.\n> The idea is that you first try to enter your request in the hash\n> table; if successful, done (and de-duping has happened automatically).\n> If no room left in the hash table, add it to the request queue as\n> normal. The checkpointer periodically empties both the hash table\n> and the queue. The hash table probably doesn't have to be too huge\n> to be effective at de-duping requests ... but having said that,\n> I have no idea exactly how to size it.\n\nI think this is just over-engineered. The originally complained-of\nproblem was all about the inefficiency of manipulating the\ncheckpointer's backend-private data structures, right? I don't see\nany particular need to mess with the shared memory data structures at\nall. If you wanted to add some de-duping logic to retail fsync\nrequests, you could probably accomplish that more cheaply by having\neach such request look at the last half-dozen or so items in the queue\nand skip inserting the new request if any of them match the new\nrequest. But I think that'd probably be a net loss, because it would\nmean holding the lock for longer.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 19 Jul 2012 08:56:51 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> Seems a bit complex, but it might be worth it. Keep in mind that I\n> eventually want to be able to make an unlogged table logged or a visca\n> versa, which will probably entail unlinking just the init fork (for\n> the logged -> unlogged direction).\n\nWell, as far as that goes, I don't see a reason why you couldn't unlink\nthe init fork immediately on commit. The checkpointer should not have\nto be involved at all --- there's no reason to send it a FORGET FSYNC\nrequest either, because there shouldn't be any outstanding writes\nagainst an init fork, no?\n\nBut having said that, this does serve as an example that we might\nsomeday want the flexibility to kill individual forks. I was\nintending to kill smgrdounlinkfork altogether, but I'll refrain.\n\n> I think this is just over-engineered. The originally complained-of\n> problem was all about the inefficiency of manipulating the\n> checkpointer's backend-private data structures, right? I don't see\n> any particular need to mess with the shared memory data structures at\n> all. If you wanted to add some de-duping logic to retail fsync\n> requests, you could probably accomplish that more cheaply by having\n> each such request look at the last half-dozen or so items in the queue\n> and skip inserting the new request if any of them match the new\n> request. But I think that'd probably be a net loss, because it would\n> mean holding the lock for longer.\n\nWhat about checking just the immediately previous entry? This would\nat least fix the problem for bulk-load situations, and the cost ought\nto be about negligible compared to acquiring the LWLock.\n\nI have also been wondering about de-duping on the backend side, but\nthe problem is that if a backend remembers its last few requests,\nit doesn't know when that cache has to be cleared because of a new\ncheckpoint cycle starting. We could advertise the current cycle\nnumber in shared memory, but you'd still need to take a lock to\nread it. (If we had memory fence primitives it could be a bit\ncheaper, but I dunno how much.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Jul 2012 10:09:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
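[Editor's illustration] The "check the previous entry" idea is small enough to show in a few lines of illustrative Python; this is a model of the logic only, not the actual C code, and the tag is assumed to identify one segment of one relation fork.

def forward_fsync_request(queue, tag):
    # Bulk loads that hammer one segment collapse to a single queued request;
    # interleaved writers to different segments still enqueue normally.
    if queue and queue[-1] == tag:
        return              # identical to the request just queued: skip the duplicate
    queue.append(tag)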
{
"msg_contents": "On Thu, Jul 19, 2012 at 10:09 AM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> Seems a bit complex, but it might be worth it. Keep in mind that I\n>> eventually want to be able to make an unlogged table logged or a visca\n>> versa, which will probably entail unlinking just the init fork (for\n>> the logged -> unlogged direction).\n>\n> Well, as far as that goes, I don't see a reason why you couldn't unlink\n> the init fork immediately on commit. The checkpointer should not have\n> to be involved at all --- there's no reason to send it a FORGET FSYNC\n> request either, because there shouldn't be any outstanding writes\n> against an init fork, no?\n\nWell, it gets written when it gets created. Some of those writes go\nthrough shared_buffers.\n\n> But having said that, this does serve as an example that we might\n> someday want the flexibility to kill individual forks. I was\n> intending to kill smgrdounlinkfork altogether, but I'll refrain.\n\nIf you want to remove it, it's OK with me. We can always put it back\nlater if it's needed. We have an SCM that allows us to revert\npatches. :-)\n\n> What about checking just the immediately previous entry? This would\n> at least fix the problem for bulk-load situations, and the cost ought\n> to be about negligible compared to acquiring the LWLock.\n\nWell, two things:\n\n1. If a single bulk load is the ONLY activity on the system, or more\ngenerally if only one segment in the system is being heavily written,\nthen that would reduce the number of entries that get added to the\nqueue, but if you're doing two bulk loads on different tables at the\nsame time, then it might not do much. From Greg Smith's previous\ncomments on this topic, I understand that having two or three entries\nalternating in the queue is a fairly common pattern.\n\n2. You say \"fix the problem\" but I'm not exactly clear what problem\nyou think this fixes. It's true that the compaction code is a lot\nslower than an ordinary queue insertion, but I think it generally\ndoesn't happen enough to matter, and when it does happen the system is\ngenerally I/O bound anyway, so who cares? One possible argument in\nfavor of doing something along these lines is that it would reduce the\namount of data that the checkpointer would have to copy while holding\nthe lock, thus causing less disruption for other processes trying to\ninsert into the request queue. But I don't know whether that effect\nis significant enough to matter.\n\n> I have also been wondering about de-duping on the backend side, but\n> the problem is that if a backend remembers its last few requests,\n> it doesn't know when that cache has to be cleared because of a new\n> checkpoint cycle starting. We could advertise the current cycle\n> number in shared memory, but you'd still need to take a lock to\n> read it. (If we had memory fence primitives it could be a bit\n> cheaper, but I dunno how much.)\n\nWell, we do have those, as of 9.2. There not being used for anything\nyet, but I've been looking for an opportunity to put them into use.\nsinvaladt.c's msgnumLock is an obvious candidate, but the 9.2 changes\nto reduce the impact of sinval synchronization work sufficiently well\nthat I haven't been motivated to tinker with it any further. Maybe it\nwould be worth doing just to exercise that code, though.\n\nOr, maybe we can use them here. But after some thought I can't see\nexactly how we'd do it. 
Memory barriers prevent a value from being\nprefetched too early or written back to main memory too late, relative\nto other memory operations by the same process, but the definition of\n\"too early\" and \"too late\" is not quite clear to me here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 19 Jul 2012 12:17:12 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Jul 19, 2012 at 10:09 AM, Tom Lane <[email protected]> wrote:\n>> What about checking just the immediately previous entry? This would\n>> at least fix the problem for bulk-load situations, and the cost ought\n>> to be about negligible compared to acquiring the LWLock.\n\n> 2. You say \"fix the problem\" but I'm not exactly clear what problem\n> you think this fixes.\n\nWhat I'm concerned about is that there is going to be a great deal more\nfsync request queue traffic in 9.2 than there ever was before, as a\nconsequence of the bgwriter/checkpointer split. The design expectation\nfor this mechanism was that most fsync requests would be generated\nlocally inside the bgwriter and thus go straight into the hash table\nwithout having to go through the shared-memory queue. I admit that\nwe have seen no benchmarks showing that there's a problem, but that's\nbecause up till yesterday the bgwriter was failing to transmit such\nmessages at all. So I'm looking for ways to cut the overhead.\n\nBut having said that, maybe we should not panic until we actually see\nsome benchmarks showing the problem.\n\nMeanwhile, we do know there's a problem with FORGET_RELATION_FSYNC.\nI have been looking at the two-hash-tables design I suggested before,\nand realized that there's a timing issue: if we just stuff \"forget\"\nrequests into a separate table, there is no method for determining\nwhether a given fsync request arrived before or after a given forget\nrequest. This is problematic if the relfilenode gets recycled: we\nneed to be able to guarantee that a previously-posted forget request\nwon't cancel a valid fsync for the new relation. I believe this is\nsoluble though, if we merge the \"forget\" requests with unlink requests,\nbecause a relfilenode can't be recycled until we do the unlink.\nSo as far as the code goes:\n\n1. Convert the PendingUnlinkEntry linked list to a hash table keyed by\nRelFileNode. It acts the same as before, and shouldn't be materially\nslower to process, but now we can determine in O(1) time whether there\nis a pending unlink for a relfilenode.\n\n2. Treat the existence of a pending unlink request as a relation-wide\nfsync cancel; so the loop in mdsync needs one extra hashtable lookup\nto determine validity of a PendingOperationEntry. As before, this\nshould not matter much considering that we're about to do an fsync().\n\n3. Tweak mdunlink so that it does not send a FORGET_RELATION_FSYNC\nmessage if it is sending an UNLINK_RELATION_REQUEST. (A side benefit\nis that this gives us another 2X reduction in fsync queue traffic,\nand not just any queue traffic but the type of traffic that we must\nnot fail to queue.)\n\nThe FORGET_RELATION_FSYNC code path will still exist, and will still\nrequire a full hashtable scan, but we don't care because it isn't\nbeing used in common situations. It would only be needed for stuff\nlike killing an init fork.\n\nThe argument that this is safe involves these points:\n\n* mdunlink cannot send UNLINK_RELATION_REQUEST until it's done\nftruncate on the main fork's first segment, because otherwise that\nsegment could theoretically get unlinked from under it before it can do\nthe truncate. But this is okay since the ftruncate won't cause any\nfsync the checkpointer might concurrently be doing to fail. 
The\nrequest *will* be sent before we unlink any other files, so mdsync\nwill be able to recover if it gets an fsync failure due to concurrent\nunlink.\n\n* Because a relfilenode cannot be recycled until we process and delete\nthe PendingUnlinkEntry during mdpostckpt, it is not possible for valid\nnew fsync requests to arrive while the PendingUnlinkEntry still exists\nto cause them to be considered canceled.\n\n* Because we only process and delete PendingUnlinkEntrys that have been\nthere since before the checkpoint started, we can be sure that any\nPendingOperationEntrys referring to the relfilenode will have been\nscanned and deleted by mdsync before we remove the PendingUnlinkEntry.\n\nUnless somebody sees a hole in this logic, I'll go make this happen.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Jul 2012 14:57:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "On Thu, Jul 19, 2012 at 2:57 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Thu, Jul 19, 2012 at 10:09 AM, Tom Lane <[email protected]> wrote:\n>>> What about checking just the immediately previous entry? This would\n>>> at least fix the problem for bulk-load situations, and the cost ought\n>>> to be about negligible compared to acquiring the LWLock.\n>\n>> 2. You say \"fix the problem\" but I'm not exactly clear what problem\n>> you think this fixes.\n>\n> What I'm concerned about is that there is going to be a great deal more\n> fsync request queue traffic in 9.2 than there ever was before, as a\n> consequence of the bgwriter/checkpointer split. The design expectation\n> for this mechanism was that most fsync requests would be generated\n> locally inside the bgwriter and thus go straight into the hash table\n> without having to go through the shared-memory queue. I admit that\n> we have seen no benchmarks showing that there's a problem, but that's\n> because up till yesterday the bgwriter was failing to transmit such\n> messages at all. So I'm looking for ways to cut the overhead.\n>\n> But having said that, maybe we should not panic until we actually see\n> some benchmarks showing the problem.\n\n+1 for not panicking. I'm prepared to believe that there could be a\nproblem here, but I'm not prepared to believe that we've characterized\nit well enough to be certain that any changes we choose to make will\nmake things better not worse.\n\n> Meanwhile, we do know there's a problem with FORGET_RELATION_FSYNC.\n> I have been looking at the two-hash-tables design I suggested before,\n> and realized that there's a timing issue: if we just stuff \"forget\"\n> requests into a separate table, there is no method for determining\n> whether a given fsync request arrived before or after a given forget\n> request. This is problematic if the relfilenode gets recycled: we\n> need to be able to guarantee that a previously-posted forget request\n> won't cancel a valid fsync for the new relation. I believe this is\n> soluble though, if we merge the \"forget\" requests with unlink requests,\n> because a relfilenode can't be recycled until we do the unlink.\n> So as far as the code goes:\n>\n> 1. Convert the PendingUnlinkEntry linked list to a hash table keyed by\n> RelFileNode. It acts the same as before, and shouldn't be materially\n> slower to process, but now we can determine in O(1) time whether there\n> is a pending unlink for a relfilenode.\n>\n> 2. Treat the existence of a pending unlink request as a relation-wide\n> fsync cancel; so the loop in mdsync needs one extra hashtable lookup\n> to determine validity of a PendingOperationEntry. As before, this\n> should not matter much considering that we're about to do an fsync().\n>\n> 3. Tweak mdunlink so that it does not send a FORGET_RELATION_FSYNC\n> message if it is sending an UNLINK_RELATION_REQUEST. (A side benefit\n> is that this gives us another 2X reduction in fsync queue traffic,\n> and not just any queue traffic but the type of traffic that we must\n> not fail to queue.)\n>\n> The FORGET_RELATION_FSYNC code path will still exist, and will still\n> require a full hashtable scan, but we don't care because it isn't\n> being used in common situations. 
It would only be needed for stuff\n> like killing an init fork.\n>\n> The argument that this is safe involves these points:\n>\n> * mdunlink cannot send UNLINK_RELATION_REQUEST until it's done\n> ftruncate on the main fork's first segment, because otherwise that\n> segment could theoretically get unlinked from under it before it can do\n> the truncate. But this is okay since the ftruncate won't cause any\n> fsync the checkpointer might concurrently be doing to fail. The\n> request *will* be sent before we unlink any other files, so mdsync\n> will be able to recover if it gets an fsync failure due to concurrent\n> unlink.\n>\n> * Because a relfilenode cannot be recycled until we process and delete\n> the PendingUnlinkEntry during mdpostckpt, it is not possible for valid\n> new fsync requests to arrive while the PendingUnlinkEntry still exists\n> to cause them to be considered canceled.\n>\n> * Because we only process and delete PendingUnlinkEntrys that have been\n> there since before the checkpoint started, we can be sure that any\n> PendingOperationEntrys referring to the relfilenode will have been\n> scanned and deleted by mdsync before we remove the PendingUnlinkEntry.\n>\n> Unless somebody sees a hole in this logic, I'll go make this happen.\n\nWhat if we change the hash table to have RelFileNode as the key and an\narray of MAX_FORKNUM bitmapsets as the value? Then when you get a\n\"forget\" request, you can just zap all the sets to empty. That seems\na whole lot simpler than your proposal and I don't see any real\ndownside. I can't actually poke a whole in your logic at the moment\nbut a simpler system that requires no assumptions about filesystem\nbehavior seems preferable to me.\n\nYou can still make an unlink request imply a corresponding\nforget-request if you want, but now that's a separate optimization.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 19 Jul 2012 16:03:20 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> What if we change the hash table to have RelFileNode as the key and an\n> array of MAX_FORKNUM bitmapsets as the value? Then when you get a\n> \"forget\" request, you can just zap all the sets to empty.\n\nHm ... the only argument I can really make against that is that there'll\nbe no way to move such a table into shared memory; but there's probably\nlittle hope of that anyway, given points made upthread. The bitmapset\nmanipulations are a bit tricky but solvable, and I agree there's\nsomething to be said for not tying this stuff so closely to the\nmechanism for relfilenode recycling.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Jul 2012 17:02:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE vs TRUNCATE explanation"
},
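[Editor's illustration] For readers following along, the structure being converged on here can be modelled in a few lines of Python: one entry per relfilenode, one segment-number set per fork (sets standing in for bitmapsets), and a forget/unlink that simply drops the entry instead of scanning the whole table. This sketches the shape of the design only, not the eventual md.c code; the class name is invented for the example.

MAX_FORKNUM = 3   # main, fsm, visibility map, init in 9.2-era terms; value assumed here

class PendingFsyncTable:
    def __init__(self):
        self.table = {}   # relfilenode -> list of per-fork segment-number sets

    def remember_fsync(self, rnode, forknum, segno):
        forks = self.table.setdefault(rnode, [set() for _ in range(MAX_FORKNUM + 1)])
        forks[forknum].add(segno)

    def forget_relation(self, rnode):
        # FORGET_RELATION_FSYNC / unlink: zap every fork's pending segments at once,
        # with no full-table scan.
        self.table.pop(rnode, None)

    def absorb_for_checkpoint(self):
        todo, self.table = self.table, {}
        return todo   # the checkpointer would fsync each (rnode, fork, segment) listed here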
{
"msg_contents": "On Tue, Jul 17, 2012 at 06:56:50PM -0400, Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n> > On Mon, Jul 16, 2012 at 3:18 PM, Tom Lane <[email protected]> wrote:\n> >> BTW, while we are on the subject: hasn't this split completely\n> >> broken the statistics about backend-initiated writes?\n> \n> > Yes, it seems to have done just that.\n> \n> This implies that nobody has done pull-the-plug testing on either\n> HEAD or 9.2 since the checkpointer split went in (2011-11-01),\n> because even a modicum of such testing would surely have shown that\n> we're failing to fsync a significant fraction of our write traffic.\n> \n> Furthermore, I would say that any performance testing done since\n> then, if it wasn't looking at purely read-only scenarios, isn't\n> worth the electrons it's written on. In particular, any performance\n> gain that anybody might have attributed to the checkpointer splitup\n> is very probably hogwash.\n> \n> This is not giving me a warm feeling about our testing practices.\n\nIs there any part of this that the buildfarm, or some other automation\nframework, might be able to handle?\n\nCheers,\nDavid.\n-- \nDavid Fetter <[email protected]> http://fetter.org/\nPhone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter\nSkype: davidfetter XMPP: [email protected]\niCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n",
"msg_date": "Sun, 22 Jul 2012 21:37:33 -0700",
"msg_from": "David Fetter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was\n\tRe: DELETE vs TRUNCATE explanation)"
},
{
"msg_contents": "\nOn 07/23/2012 12:37 AM, David Fetter wrote:\n> On Tue, Jul 17, 2012 at 06:56:50PM -0400, Tom Lane wrote:\n>> Robert Haas <[email protected]> writes:\n>>> On Mon, Jul 16, 2012 at 3:18 PM, Tom Lane <[email protected]> wrote:\n>>>> BTW, while we are on the subject: hasn't this split completely\n>>>> broken the statistics about backend-initiated writes?\n>>> Yes, it seems to have done just that.\n>> This implies that nobody has done pull-the-plug testing on either\n>> HEAD or 9.2 since the checkpointer split went in (2011-11-01),\n>> because even a modicum of such testing would surely have shown that\n>> we're failing to fsync a significant fraction of our write traffic.\n>>\n>> Furthermore, I would say that any performance testing done since\n>> then, if it wasn't looking at purely read-only scenarios, isn't\n>> worth the electrons it's written on. In particular, any performance\n>> gain that anybody might have attributed to the checkpointer splitup\n>> is very probably hogwash.\n>>\n>> This is not giving me a warm feeling about our testing practices.\n> Is there any part of this that the buildfarm, or some other automation\n> framework, might be able to handle?\n>\n\nI'm not sure how you automate testing a pull-the-plug scenario.\n\nThe buildfarm is not at all designed to test performance. That's why we \nwant a performance farm.\n\ncheers\n\nandrew\n",
"msg_date": "Mon, 23 Jul 2012 08:29:16 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was\n\tRe: DELETE vs TRUNCATE explanation)"
},
{
"msg_contents": "On Mon, Jul 23, 2012 at 08:29:16AM -0400, Andrew Dunstan wrote:\n> \n> On 07/23/2012 12:37 AM, David Fetter wrote:\n> >On Tue, Jul 17, 2012 at 06:56:50PM -0400, Tom Lane wrote:\n> >>Robert Haas <[email protected]> writes:\n> >>>On Mon, Jul 16, 2012 at 3:18 PM, Tom Lane <[email protected]> wrote:\n> >>>>BTW, while we are on the subject: hasn't this split completely\n> >>>>broken the statistics about backend-initiated writes?\n> >>>Yes, it seems to have done just that.\n> >>This implies that nobody has done pull-the-plug testing on either\n> >>HEAD or 9.2 since the checkpointer split went in (2011-11-01),\n> >>because even a modicum of such testing would surely have shown that\n> >>we're failing to fsync a significant fraction of our write traffic.\n> >>\n> >>Furthermore, I would say that any performance testing done since\n> >>then, if it wasn't looking at purely read-only scenarios, isn't\n> >>worth the electrons it's written on. In particular, any performance\n> >>gain that anybody might have attributed to the checkpointer splitup\n> >>is very probably hogwash.\n> >>\n> >>This is not giving me a warm feeling about our testing practices.\n> >Is there any part of this that the buildfarm, or some other automation\n> >framework, might be able to handle?\n> >\n> \n> I'm not sure how you automate testing a pull-the-plug scenario.\n\nI have a dim memory of how the FreeBSD project was alleged to have\ndone it, namely by rigging a serial port (yes, it was that long ago)\nto the power supply of another machine and randomly cycling the power.\n\n> The buildfarm is not at all designed to test performance. That's why\n> we want a performance farm.\n\nRight. Apart from hardware, what are we stalled on?\n\nCheers,\nDavid.\n-- \nDavid Fetter <[email protected]> http://fetter.org/\nPhone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter\nSkype: davidfetter XMPP: [email protected]\niCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n",
"msg_date": "Mon, 23 Jul 2012 05:41:23 -0700",
"msg_from": "David Fetter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was\n\tRe: DELETE vs TRUNCATE explanation)"
},
{
"msg_contents": "\nOn 07/23/2012 08:41 AM, David Fetter wrote:\n>> The buildfarm is not at all designed to test performance. That's why\n>> we want a performance farm.\n> Right. Apart from hardware, what are we stalled on?\n>\n\nSoftware :-)\n\nI am trying to find some cycles to get something going.\n\ncheers\n\nandrew\n\n\n",
"msg_date": "Mon, 23 Jul 2012 08:56:38 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was\n\tRe: DELETE vs TRUNCATE explanation)"
},
{
"msg_contents": "On 07/23/2012 08:29 PM, Andrew Dunstan wrote:\n\n> I'm not sure how you automate testing a pull-the-plug scenario.\n\nfire up kvm or qemu instances, then kill 'em.\n\n--\nCraig Ringer\n",
"msg_date": "Mon, 23 Jul 2012 21:04:46 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was\n\tRe: DELETE vs TRUNCATE explanation)"
},
{
"msg_contents": "\nOn 07/23/2012 09:04 AM, Craig Ringer wrote:\n> On 07/23/2012 08:29 PM, Andrew Dunstan wrote:\n>\n>> I'm not sure how you automate testing a pull-the-plug scenario.\n>\n> fire up kvm or qemu instances, then kill 'em.\n>\n>\n\nYeah, maybe. Knowing just when to kill them might be an interesting \nquestion.\n\nI'm also unsure how much nice cleanup the host supervisor does in such \ncases. VMs are wonderful things, but they aren't always the answer. I'm \nnot saying they aren't here, just wondering.\n\ncheers\n\nandrew\n\n\n",
"msg_date": "Mon, 23 Jul 2012 09:47:16 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was\n\tRe: DELETE vs TRUNCATE explanation)"
},
{
"msg_contents": "On 07/23/2012 09:47 PM, Andrew Dunstan wrote:\n>\n> On 07/23/2012 09:04 AM, Craig Ringer wrote:\n>> On 07/23/2012 08:29 PM, Andrew Dunstan wrote:\n>>\n>>> I'm not sure how you automate testing a pull-the-plug scenario.\n>>\n>> fire up kvm or qemu instances, then kill 'em.\n>>\n>>\n>\n> Yeah, maybe. Knowing just when to kill them might be an interesting \n> question.\n>\n> I'm also unsure how much nice cleanup the host supervisor does in such \n> cases. VMs are wonderful things, but they aren't always the answer. \n> I'm not saying they aren't here, just wondering.\nI've done some testing with this, and what it boils down to is that any \ndata that made it to the virtual disk is persistent after a VM kill. \nAnything in dirty buffers on the VM guest is lost. It's a very close \nmatch for real hardware. I haven't tried to examine the details of the \nhandling of virtualised disk hardware write caches, but disks should be \nin write-through mode anyway. A `kill -9` will clear 'em for sure, \nanyway, as the guest has no chance to do any cleanup.\n\nOne of the great things about kvm and qemu for this sort of testing is \nthat it's just another program. There's very little magic, and it's \nquite easy to test and trace.\n\nI have a qemu/kvm test harness I've been using for another project that \nI need to update and clean up as it'd be handy for this. It's just a \nmatter of making the time, as it's been a busy few days.\n\n--\nCraig Ringer\n",
"msg_date": "Mon, 23 Jul 2012 21:58:47 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was\n\tRe: DELETE vs TRUNCATE explanation)"
},
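
A minimal sketch of the kill-at-a-random-point harness being described here, assuming qemu/kvm on the host, a disk image that boots PostgreSQL plus a write workload on its own, and Python on the test controller; the image name, memory size and timings are illustrative assumptions, not an existing tool:

import random
import signal
import subprocess
import time

# Assumed guest image: boots, starts postgres and a write workload by itself.
QEMU = ["qemu-system-x86_64", "-enable-kvm", "-m", "1024",
        "-drive", "file=pg-test.qcow2,cache=none", "-display", "none"]

for run in range(100):
    vm = subprocess.Popen(QEMU)
    time.sleep(60 + random.uniform(0, 120))   # boot, run, then crash at a random point
    vm.send_signal(signal.SIGKILL)            # the guest gets no chance to flush anything
    vm.wait()
    # Boot the same image again so crash recovery runs, then drive whatever
    # application-level consistency checks make sense (row counts, checksums)
    # from the test controller before shutting the VM down and looping.
    vm = subprocess.Popen(QEMU)
    time.sleep(120)
    vm.terminate()
    vm.wait()

The data-correctness check after the second boot is the genuinely hard part and is only indicated by a comment here.
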
{
"msg_contents": "On Mon, Jul 23, 2012 at 5:41 AM, David Fetter <[email protected]> wrote:\n> On Mon, Jul 23, 2012 at 08:29:16AM -0400, Andrew Dunstan wrote:\n>>\n>>\n>> I'm not sure how you automate testing a pull-the-plug scenario.\n>\n> I have a dim memory of how the FreeBSD project was alleged to have\n> done it, namely by rigging a serial port (yes, it was that long ago)\n> to the power supply of another machine and randomly cycling the power.\n\nOn Linux,\n\necho b > /proc/sysrq-trigger\n\nIs supposed to take it down instantly, with no flushing of dirty buffers.\n\nCheers,\n\nJeff\n",
"msg_date": "Mon, 23 Jul 2012 08:02:51 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was\n\tRe: DELETE vs TRUNCATE explanation)"
},
{
"msg_contents": "On Wed, Jul 18, 2012 at 1:13 AM, Craig Ringer <[email protected]> wrote:\n\n> That makes me wonder if on top of the buildfarm, extending some buildfarm\n> machines into a \"crashfarm\" is needed:\n>\n> - Keep kvm instances with copy-on-write snapshot disks and the build env\n> on them\n> - Fire up the VM, do a build, and start the server\n> - From outside the vm have the test controller connect to the server and\n> start a test run\n> - Hard-kill the OS instance at a random point in time.\n>\n\nFor what it's worth you don't need to do a hard kill of the vm and start\nover repeatedly to kill at different times. You could take a snapshot of\nthe disk storage and keep running. You could take many snapshots from a\nsingle run. Each snapshot would represent the storage that would exist if\nthe machine had crashed at the point in time that the snapshot was taken.\n\nYou do want the snapshots to be taken using something outside the virtual\nmachine. Either the kvm storage layer or using lvm on the host. But not\nusing lvm on the guest virtual machine.\n\nAnd yes, the hard part that always stopped me from looking at this was\nhaving any way to test the correctness of the data.\n\n-- \ngreg\n\nOn Wed, Jul 18, 2012 at 1:13 AM, Craig Ringer <[email protected]> wrote:\nThat makes me wonder if on top of the buildfarm, extending some buildfarm machines into a \"crashfarm\" is needed:\n\n- Keep kvm instances with copy-on-write snapshot disks and the build env on them\n- Fire up the VM, do a build, and start the server\n- From outside the vm have the test controller connect to the server and start a test run\n- Hard-kill the OS instance at a random point in time.For what it's worth you don't need to do a hard kill of the vm and start over repeatedly to kill at different times. You could take a snapshot of the disk storage and keep running. You could take many snapshots from a single run. Each snapshot would represent the storage that would exist if the machine had crashed at the point in time that the snapshot was taken.\nYou do want the snapshots to be taken using something outside the virtual machine. Either the kvm storage layer or using lvm on the host. But not using lvm on the guest virtual machine.\nAnd yes, the hard part that always stopped me from looking at this was having any way to test the correctness of the data.-- greg",
"msg_date": "Tue, 24 Jul 2012 14:33:26 +0100",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Checkpointer split has broken things dramatically (was Re: DELETE\n\tvs TRUNCATE explanation)"
},
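
A rough host-side sketch of the snapshot variant, assuming the guest's data disk lives on an LVM logical volume on the host; the volume group, LV names and sizes are made up, and lvcreate needs root:

import subprocess
import time

ORIGIN = "/dev/vg0/pgdata"   # assumed host LV backing the guest's data disk

for i in range(10):
    time.sleep(30)           # the workload keeps running in the guest throughout
    subprocess.check_call(["lvcreate", "--snapshot", "--name", "pgsnap%d" % i,
                           "--size", "2G", ORIGIN])
# Each pgsnapN now looks like the storage of a machine that lost power at that
# moment; mount it elsewhere, start a postgres on it, and check the recovered
# data, without ever interrupting the original run.
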
{
"msg_contents": "On Thu, Jul 12, 2012 at 4:21 PM, Harold A. Giménez\n<[email protected]> wrote:\n> Hi,\n>\n> I work with Daniel Farina and was the other engineer who \"discovered\" this,\n> once again. That is, I got bit by it and have been running TRUNCATE on my\n> test suites for years.\n\nHi Daniel and Harold,\n\nI don't know if you followed this thread over into the -hacker mailing list.\n\nThere was some bookkeeping code that was N^2 in the number of\ntruncations performed during any given checkpoint cycle. That has\nbeen fixed in 9.2Beta3.\n\nI suspect that this was the root cause of the problem you encountered.\n\nIf you are in a position to retest using 9.2Beta3\n(http://www.postgresql.org/about/news/1405/), I'd be interested to\nknow if it does make truncations comparable in speed to unqualified\ndeletes.\n\nThanks,\n\nJeff\n",
"msg_date": "Thu, 9 Aug 2012 11:06:54 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE vs TRUNCATE explanation"
}
] |
[
{
"msg_contents": "Version.....\nPostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.5.2, 64-bit\n \nServer.....\nServer: RX800 S2 (8 x Xeon 7040 3GHz dual-core processors, 32GB memory\nO/S: SLES11 SP1 64-bit\n \nScenario.....\nLegacy application with bespoke but very efficient interface to its persistent data. We're looking to replace the application and use \nPostgreSQL to hold the data. Performance measures on the legacy application on the same server shows that it can perform a particular read operation in ~215 microseconds (averaged) which includes processing the request and getting the result out.\n \nQuestion......\nI've written an Immutable stored procedure that takes no parameters and returns a fixed value to try and determine the round trip overhead of a query to PostgreSQL. Call to sp is made using libpq. We're all local and using UNIX domain sockets.\n \nClient measures are suggesting ~150-200 microseconds to call sp and get the answer back\n \nping to loopback returns in ~20 microseconds (I assume domain sockets are equivalent).\n \nstrace of server process I think confirms time at server to be ~150-200 microsecs. For example:\n11:17:50.109936 recvfrom(6, \"P\\0\\0\\0'\\0SELECT * FROM sp_select_no\"..., 8192, 0, NULL, NULL) = 77 <0.000018>\n11:17:50.110098 sendto(6, \"1\\0\\0\\0\\0042\\0\\0\\0\\4T\\0\\0\\0(\\0\\1sp_select_no_op\"..., 86, 0, NULL, 0) = 86 <0.000034>\n\nSo it looks like a local no-op overhead of at least 150 microseconds which would leave us struggling. \nCould someone please let me know if this is usual and if so where the time's spent? \nShort of getting a faster server, is there anything I can do to influence this?\n \nThanks, \nAndy \n \t\t \t \t\t \n\n\n\n\nVersion.....PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.5.2, 64-bit\n \nServer.....Server: RX800 S2 (8 x Xeon 7040 3GHz dual-core processors, 32GB memoryO/S: SLES11 SP1 64-bit\n \nScenario.....Legacy application with bespoke but very efficient interface to its persistent data. We're looking to replace the application and use \nPostgreSQL to hold the data. Performance measures on the legacy application on the same server shows that it can perform a particular read operation in ~215 microseconds (averaged) which includes processing the request and getting the result out.\n \nQuestion......I've written an Immutable stored procedure that takes no parameters and returns a fixed value to try and determine the round trip overhead of a query to PostgreSQL. Call to sp is made using libpq. We're all local and using UNIX domain sockets.\n \nClient measures are suggesting ~150-200 microseconds to call sp and get the answer back\n \nping to loopback returns in ~20 microseconds (I assume domain sockets are equivalent).\n \nstrace of server process I think confirms time at server to be ~150-200 microsecs. For example:\n11:17:50.109936 recvfrom(6, \"P\\0\\0\\0'\\0SELECT * FROM sp_select_no\"..., 8192, 0, NULL, NULL) = 77 <0.000018>11:17:50.110098 sendto(6, \"1\\0\\0\\0\\0042\\0\\0\\0\\4T\\0\\0\\0(\\0\\1sp_select_no_op\"..., 86, 0, NULL, 0) = 86 <0.000034>\nSo it looks like a local no-op overhead of at least 150 microseconds which would leave us struggling. \nCould someone please let me know if this is usual and if so where the time's spent? \nShort of getting a faster server, is there anything I can do to influence this?\n \nThanks, Andy",
"msg_date": "Wed, 11 Jul 2012 11:46:23 +0000",
"msg_from": "Andy Halsall <[email protected]>",
"msg_from_op": true,
"msg_subject": "query overhead"
},
{
"msg_contents": "Andy Halsall <[email protected]> writes:\n> I've written an Immutable stored procedure that takes no parameters and returns a fixed value to try and determine the round trip overhead of a query to PostgreSQL. Call to sp is made using libpq. We're all local and using UNIX domain sockets.\n \n> Client measures are suggesting ~150-200 microseconds to call sp and get the answer back\n\nThat doesn't sound out of line for what you're doing, which appears to\ninclude parsing/planning a SELECT command. Some of that overhead could\nprobably be avoided by using a prepared statement instead of a plain\nquery. Or you could try using the \"fast path\" API (see libpq's PQfn)\nto invoke the function directly without any SQL query involved.\n\nReally, however, the way to make things fly is to get rid of the round\ntrip overhead in the first place by migrating more of your application\nlogic into the stored procedure. I realize that that might require\npretty significant rewrites, but if you can't tolerate per-query\noverheads in the 100+ usec range, that's where you're going to end up.\n\nIf you don't like any of those answers, maybe Postgres isn't the\nsolution for you. You might consider an embeddable database such\nas SQLLite.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Jul 2012 12:15:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query overhead"
},
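
For reference, a sketch of the prepared-statement route from a client, here using psycopg2 rather than raw libpq (the libpq-level equivalents are PQprepare/PQexecPrepared, or PQfn for the fast path); the DSN and iteration count are placeholders:

import time
import psycopg2

conn = psycopg2.connect("dbname=test")   # placeholder DSN; local UNIX-socket connection
conn.autocommit = True
cur = conn.cursor()
cur.execute("PREPARE noop AS SELECT * FROM sp_select_no_op()")  # parse/plan once

n = 10000
start = time.perf_counter()
for _ in range(n):
    cur.execute("EXECUTE noop")          # per call: no re-parse/re-plan of the query body
    cur.fetchone()
elapsed = time.perf_counter() - start
print("%.1f microseconds per round trip" % (elapsed / n * 1e6))
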
{
"msg_contents": "On 07/11/2012 07:46 PM, Andy Halsall wrote:\n>\n> I've written an Immutable stored procedure that takes no parameters \n> and returns a fixed value to try and determine the round trip overhead \n> of a query to PostgreSQL. Call to sp is made using libpq. We're all \n> local and using UNIX domain sockets.\n>\nPL/PgSQL or SQL stored proc? There's a definite calling overhead for \nPL/PgSQL compared to plain SQL functions. SQL functions in turn cost \nmore than a direct statement.\n\nThese costs aren't big. They're massively outweighed by any kind of disk \naccess or any non-trivial query. They start to add up if you have a lot \nof procs that wrap a simple \"SELECT * FROM x WHERE x.id = $1\" though.\n\n> Client measures are suggesting ~150-200 microseconds to call sp and \n> get the answer back\n0.0015 to 0.002 milliseconds?\n\nThat's ... um ... fast. Presumably that's during a loop where your no-op \nis run repeatedly without connection setup costs, etc.\n\n>\n> ping to loopback returns in ~20 microseconds (I assume domain sockets \n> are equivalent).\nUNIX domain sockets are typically at least as fast and somewhat lower \noverhead.\n\n> So it looks like a local no-op overhead of at least 150 microseconds \n> which would leave us struggling.\n> Could someone please let me know if this is usual and if so where the \n> time's spent?\n> Short of getting a faster server, is there anything I can do to \n> influence this?\n\nI'm not sure how much a faster server would help with single query \nresponse time. It'll help with response to many parallel queries, but \nmay not speed up a single query, especially a tiny lightweight one, \nparticularly dramatically.\n\nThe Xeon 7040:\nhttp://ark.intel.com/products/27226/Intel-Xeon-Processor-7040-(4M-Cache-3_00-GHz-667-MHz-FSB) \n<http://ark.intel.com/products/27226/Intel-Xeon-Processor-7040-%284M-Cache-3_00-GHz-667-MHz-FSB%29>\nis not the newest beast out there, but it's not exactly slow.\n\nHonestly, PostgreSQL's focus is on performance with bigger units of \nwork, not on being able to return a response to a tiny query in \nmicroseconds. If you are converting an application that has microsecond \nresponse time requirements and hammers its database with millions of \ntiny queries, PostgreSQL is probably not going to be your best bet.\n\nIf you're able to adapt the app to use set-oriented access patterns \ninstead of looping, eg instead of (pseudocode):\n\ncustomer_ids = [ ... array from somewhere ... ]\nfor customer_id in ids:\n c = SELECT c FROM customer c WHERE customer_id = :id\n if c.some_var:\n UPDATE customer SET c.some_other_var = 't'\n\nyou can do:\n\nUPDATE customer SET c.some_other_var = [expression] WHERE [expression]\n\nthen you'll get much better results from Pg.\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 07/11/2012 07:46 PM, Andy Halsall\n wrote:\n\n\n\n\n\n I've written an Immutable stored procedure that takes no\n parameters and returns a fixed value to try and determine the\n round trip overhead of a query to PostgreSQL. Call to sp is made\n using libpq. We're all local and using UNIX domain sockets.\n\n\n\n PL/PgSQL or SQL stored proc? There's a definite calling overhead for\n PL/PgSQL compared to plain SQL functions. SQL functions in turn cost\n more than a direct statement.\n\n These costs aren't big. They're massively outweighed by any kind of\n disk access or any non-trivial query. 
They start to add up if you\n have a lot of procs that wrap a simple \"SELECT * FROM x WHERE x.id =\n $1\" though.\n\n\n\n Client measures are suggesting ~150-200 microseconds to call sp\n and get the answer back\n\n\n 0.0015 to 0.002 milliseconds?\n\n That's ... um ... fast. Presumably that's during a loop where your\n no-op is run repeatedly without connection setup costs, etc.\n\n\n\n \n ping to loopback returns in ~20 microseconds (I assume domain\n sockets are equivalent).\n\n\n UNIX domain sockets are typically at least as fast and somewhat\n lower overhead.\n\n\nSo it looks like a local no-op overhead of at least 150\n microseconds which would leave us struggling. \n Could someone please let me know if this is usual and if so\n where the time's spent? \n Short of getting a faster server, is there anything I can do to\n influence this?\n\n\n\n I'm not sure how much a faster server would help with single query\n response time. It'll help with response to many parallel queries,\n but may not speed up a single query, especially a tiny lightweight\n one, particularly dramatically.\n\n The Xeon 7040:\n \n \nhttp://ark.intel.com/products/27226/Intel-Xeon-Processor-7040-(4M-Cache-3_00-GHz-667-MHz-FSB)\n is not the newest beast out there, but it's not exactly slow.\n\n Honestly, PostgreSQL's focus is on performance with bigger units of\n work, not on being able to return a response to a tiny query in\n microseconds. If you are converting an application that has\n microsecond response time requirements and hammers its database with\n millions of tiny queries, PostgreSQL is probably not going to be\n your best bet.\n\n If you're able to adapt the app to use set-oriented access patterns\n instead of looping, eg instead of (pseudocode):\n\n customer_ids = [ ... array from somewhere ... ]\n for customer_id in ids:\n c = SELECT c FROM customer c WHERE customer_id = :id\n if c.some_var:\n UPDATE customer SET c.some_other_var = 't'\n\n you can do:\n\n UPDATE customer SET c.some_other_var = [expression] WHERE\n [expression]\n\n then you'll get much better results from Pg.\n\n --\n Craig Ringer",
"msg_date": "Sat, 14 Jul 2012 11:28:26 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query overhead"
},
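
The set-oriented rewrite above, expressed from the client side; the table and column names are taken from the pseudocode in the message, and psycopg2 (which adapts a Python list to a SQL array) is an assumption:

import psycopg2

conn = psycopg2.connect("dbname=test")      # placeholder DSN
cur = conn.cursor()

customer_ids = [1, 2, 3]                    # "array from somewhere", as above

# One statement instead of one SELECT plus one UPDATE per id:
cur.execute(
    "UPDATE customer SET some_other_var = 't' "
    "WHERE customer_id = ANY(%s) AND some_var",
    (customer_ids,))
conn.commit()
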
{
"msg_contents": "On 07/16/2012 06:13 PM, Andy Halsall wrote:\n> Thanks for the responses. I take the points - the times we're dealing \n> with are very small. Sorry but I'm a bit confused by the suggestions \n> around function types / prepared statements, but probably haven't been \n> clear in my question: I'm invoking a PL/PgSQL function from libpq, for \n> example the no_op mentioned in first post does:\n>\n> CREATE OR REPLACE FUNCTION sp_select_no_op() RETURNS integer AS\n> '\n> begin\n> return 1;\n> end\n> '\n> language 'plpgsql' IMMUTABLE;\n>\n> My understanding was that the plan for this would be prepared once and \n> reused. So no addvantage in a prepared statement? Also no advantage in \n> making this a plain SQL function as these don't get cached?\nAFAIK SQL functions don't get cached plans - though I'm not 100% on \nthis. They can be lots cheaper for wrapping simple operations, though.\n\nI'm just questioning why you're going immediately to PL/PgSQL - or \nstored procs at all. It might be a bigger hammer than you need.\n\nWhat sorts of operations will your application be performing? Is there \nany reason it can't directly use simple INSERT, UPDATE, DELETE and \nSELECT statements, possibly with PREPARE and EXECUTE at libpq level?\n\nIf you're trying to go as fast as humanly possible in emulating an \nISAM-like access model with lots of small fast accesses, PQprepare of \nsimple S/I/U/D statements, then proper use of PQexecPrepared, is likely \nto be hard to beat.\n\nIf you're working with ISAM-like access though, cursors may well be very \nhelpful for you. It's a pity for your app that Pg doesn't support \ncursors that see changes committed after cursor creation, since these \nare ideal when emulating ISAM \"next record\" / \"previous record\" access \nmodels. They're still suitable for tasks where you know the app doesn't \nneed to see concurrently modified data, though.\n\nCan you show a typical sequence of operations for your DB?\n\nAlso, out of interest, are you migrating from a traditional shared-file \nISAM-derived database system, or something funkier?\n\n>\n> Embedded database such as SQLLite is a good idea except that we'll be \n> multi-process and my understanding is that they lock the full database \n> on any write, which is off-putting.\n\nWrite concurrency in SQLite is miserable, yeah, but you get very fast \nshared-access reads as a trade-off and it's much closer to your app's \nold DB design. It depends a lot on your workload.\n\n--\nCraig Ringer\n\n\n\n\n\n\n\nOn 07/16/2012 06:13 PM, Andy Halsall\n wrote:\n\n\n\n\n Thanks for the responses. I take the points - the times we're\n dealing with are very small. Sorry but I'm a bit confused by the\n suggestions around function types / prepared statements, but\n probably haven't been clear in my question: I'm invoking a\n PL/PgSQL function from libpq, for example the no_op mentioned in\n first post does:\n \n CREATE OR REPLACE FUNCTION sp_select_no_op() RETURNS integer\n AS\n '\n begin\n return 1;\n end\n '\n language 'plpgsql' IMMUTABLE;\n \n My understanding was that the plan for this would be prepared\n once and reused. So no addvantage in a prepared statement? Also\n no advantage in making this a plain SQL function as these don't\n get cached?\n\n\n AFAIK SQL functions don't get cached plans - though I'm not 100% on\n this. They can be lots cheaper for wrapping simple operations,\n though.\n\n I'm just questioning why you're going immediately to PL/PgSQL - or\n stored procs at all. 
It might be a bigger hammer than you need.\n\n What sorts of operations will your application be performing? Is\n there any reason it can't directly use simple INSERT, UPDATE, DELETE\n and SELECT statements, possibly with PREPARE and EXECUTE at libpq\n level?\n\n If you're trying to go as fast as humanly possible in emulating an\n ISAM-like access model with lots of small fast accesses, PQprepare\n of simple S/I/U/D statements, then proper use of PQexecPrepared, is\n likely to be hard to beat.\n\n If you're working with ISAM-like access though, cursors may well be\n very helpful for you. It's a pity for your app that Pg doesn't\n support cursors that see changes committed after cursor creation,\n since these are ideal when emulating ISAM \"next record\" / \"previous\n record\" access models. They're still suitable for tasks where you\n know the app doesn't need to see concurrently modified data, though.\n\n Can you show a typical sequence of operations for your DB?\n\n Also, out of interest, are you migrating from a traditional\n shared-file ISAM-derived database system, or something funkier?\n\n\n\n \n Embedded database such as SQLLite is a good idea except that\n we'll be multi-process and my understanding is that they lock\n the full database on any write, which is off-putting.\n\n\n Write concurrency in SQLite is miserable, yeah, but you get very\n fast shared-access reads as a trade-off and it's much closer to your\n app's old DB design. It depends a lot on your workload.\n\n --\n Craig Ringer",
"msg_date": "Mon, 16 Jul 2012 22:43:57 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query overhead"
},
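
A sketch of the cursor-based "get next" pattern mentioned here, using a psycopg2 named (server-side) cursor; the table name, ordering key and batch size are placeholders:

import psycopg2

conn = psycopg2.connect("dbname=test")      # placeholder DSN
with conn:
    # Named cursor => a real server-side DECLARE ... SCROLL CURSOR
    cur = conn.cursor(name="recwalk", scrollable=True)
    cur.itersize = 100                      # rows pulled per round trip while iterating
    cur.execute("SELECT * FROM records ORDER BY rec_key")
    # cur.scroll(n) moves within the result set for previous-record access
    for row in cur:                         # "get next", batched behind the scenes
        pass                                # process(row)

As noted above, the cursor will not see rows committed after it was opened, so this only fits the parts of the workload that can live with a fixed snapshot.
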
{
"msg_contents": "On 07/17/2012 11:33 PM, Andy Halsall wrote:\n\n>\n> If you're working with ISAM-like access though, cursors may well be \n> very helpful for you. It's a pity for your app that Pg doesn't support \n> cursors that see changes committed after cursor creation, since these \n> are ideal when emulating ISAM \"next record\" / \"previous record\" access \n> models. They're still suitable for tasks where you know the app \n> doesn't need to see concurrently modified data, though.\n>\n> > That's right, that would've been ideal behaviour for us. We're going \n> to manage our own shared cache in the application layer to give \n> similar functionality. We have lots of reads but fewer writes.\n\nHow have you gone with this? I'm curious.\n\nBy the way, when replying it's the convention to indent the text written \nby the person you're replying to, not indent your own text. It's kind of \nhard to read.\n\n\n> > In the context of what we've been talking about, we're reading a set \n> of information which is ordered in a reasonably complex way. Set is \n> about 10000 records and requires a table join. This sort takes a while \n> as it heap scans - couldn't persuade it to use indexes.\n>\n> > Having read the set, the application \"gets next\" until the end. To \n> start with we were re-establishing the set (minus the previous record) \n> and choosing the first (LIMIT 1) on each \"get next\" - obviously a \n> non-starter. We moved to caching the record keys for the set and only \n> visiting the database for the specific records on each \"get next\" - \n> hence the questions about round trip overhead for small queries.\nGiven that pattern, why aren't you using a cursor? Do you need to see \nconcurrent changes? Is the cursor just held open too long, affecting \nautovacum?\n\n--\nCraig Ringer\n\n\n",
"msg_date": "Sat, 21 Jul 2012 16:30:44 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query overhead"
},
{
"msg_contents": "On Wed, Jul 11, 2012 at 4:46 AM, Andy Halsall <[email protected]> wrote:\n> Version.....\n> PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.5.2,\n> 64-bit\n>\n> Server.....\n> Server: RX800 S2 (8 x Xeon 7040 3GHz dual-core processors, 32GB memory\n> O/S: SLES11 SP1 64-bit\n\nI don't really know how to compare these, but I've got:\n\nIntel(R) Core(TM)2 CPU 6400 @ 2.13GHz\n\n>\n> Scenario.....\n> Legacy application with bespoke but very efficient interface to its\n> persistent data. We're looking to replace the application and use\n> PostgreSQL to hold the data. Performance measures on the legacy application\n> on the same server shows that it can perform a particular read operation in\n> ~215 microseconds (averaged) which includes processing the request and\n> getting the result out.\n>\n> Question......\n> I've written an Immutable stored procedure that takes no parameters and\n> returns a fixed value to try and determine the round trip overhead of a\n> query to PostgreSQL. Call to sp is made using libpq. We're all local and\n> using UNIX domain sockets.\n>\n> Client measures are suggesting ~150-200 microseconds to call sp and get the\n> answer back\n\nusing the plpgsql function you provided down thread:\n\ncat dummy2.sql\nselect sp_select_no_op();\n\npgbench -f dummy2.sql -T300\ntps = 18703.309132 (excluding connections establishing)\n\nSo that comes out to 53.5 microseconds/call.\n\nIf I use a prepared statement:\n\npgbench -M prepared -f dummy2.sql -T300\ntps = 30289.908144 (excluding connections establishing)\n\nor 33 us/call.\n\nSo unless your server is a lot slower than mine, I think your client\nmay be the bottleneck. What is your client program? what does \"top\"\nshow as the relative CPU usage of your client program vs the \"postgres\n... [local]\" program to which it is connected?\n\nCheers,\n\nJeff\n\n",
"msg_date": "Fri, 10 Aug 2012 11:19:56 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query overhead"
}
] |
[
{
"msg_contents": "Hi Fellows\n\nI have a question regarding PostgreSQL 9.1 indexing. \n\nI am having a table and want to create a index for a column and I want to\nstore the data with time zone for that column. The questions are:\n\n1. Can I create a index for a column which store time stamp with time zone.\nIf can is there ant performance issues?\n\n2. Also I can store the time stamp value with zone as a long integer value.\nIf so what is the difference between the above step. Which one is better.\n\nMany Thanks.\n\nRoshan\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/PostgreSQL-index-issue-tp5716322.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 11 Jul 2012 17:03:39 -0700 (PDT)",
"msg_from": "codevally <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL index issue"
},
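
For what it's worth on question 1: a timestamp with time zone column can be indexed with an ordinary btree, nothing special is needed, and whether a particular query then uses the index is the usual planner question (table size, statistics, selectivity). A sketch with made-up table and column names, assuming psycopg2 as the driver:

import psycopg2

conn = psycopg2.connect("dbname=test")      # placeholder DSN
cur = conn.cursor()
cur.execute("CREATE TABLE events (id bigserial PRIMARY KEY, "
            "created timestamptz NOT NULL)")
cur.execute("CREATE INDEX events_created_idx ON events (created)")
# On a populated, analyzed table a range predicate like this can use the index:
cur.execute("SELECT count(*) FROM events "
            "WHERE created >= now() - interval '1 day'")
print(cur.fetchone()[0])
conn.commit()
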
{
"msg_contents": "codevally wrote:\n> I have a question regarding PostgreSQL 9.1 indexing.\n> \n> I am having a table and want to create a index for a column and I want\nto\n> store the data with time zone for that column. The questions are:\n> \n> 1. Can I create a index for a column which store time stamp with time\nzone.\n> If can is there ant performance issues?\n> \n> 2. Also I can store the time stamp value with zone as a long integer\nvalue.\n> If so what is the difference between the above step. Which one is\nbetter.\n> \n> Many Thanks.\n\nIf you didn't like the answer that you got to exactly the same\nquestion on\nhttp://archives.postgresql.org/message-id/CAOokBJGA56tiyLZGPf859fmLiZidp\np19Q5pPbT65Hwc4wORegg%40mail.gmail.com\nwhy didn't you say so?\n\nYours,\nLaurenz Albe\n",
"msg_date": "Mon, 16 Jul 2012 09:44:31 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL index issue"
}
] |
[
{
"msg_contents": "I think my original post here might have gotten caught in a spamtrap, so re-trying, I apologize if it ends\nup being a duplicate.\n\nI also forgot to mention that I'm on PG9.1.1 / RHEL 6.2 x64\n\nI believe this is the reason for the behavior i was seeing in this post as well.\nhttp://archives.postgresql.org/pgsql-performance/2012-07/msg00035.php\n\n---\nPeriodically when restarting my database I find that my PREAPRE time goes through the roof.\n\nThis query usually runs in a few ms total. After a recent database restart I find that it's up to 8 seconds\nconsistantly just to PREPARE.\n\nEven EXPLAIN ends up taking time.\n\npsql -f tt.sql a\nPager usage is off.\nTiming is on.\nPREPARE\nTime: 7965.808 ms\n[...]\n(1 row)\nTime: 1.147 ms\n\n\nI did an strace on the backend and saw the below.. it seems like there is a problem with grabbing\na semaphore?\n\nstrace -p 2826\nProcess 2826 attached - interrupt to quit\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\n[snip 1000s of lines]\nselect(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout)\n[snip 1000s of lines]\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nsemop(21069900, {{10, 1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nsemop(20709441, {{5, -1, 0}}, 1) = 0\nbrk(0x1207000) = 0x1207000\nbrk(0x1229000) = 0x1229000\nbrk(0x124a000) = 0x124a000\nmmap(NULL, 135168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2ac9c6b95000\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 934, MSG_NOSIGNAL, NULL, 0) = -1 ENOTCONN (Transport endpoint is not connected)\nclose(8) = 0\nsocket(PF_FILE, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 8\nconnect(8, {sa_family=AF_FILE, path=\"/dev/log\"}, 110) = 0\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 934, MSG_NOSIGNAL, NULL, 0) = 934\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 61, MSG_NOSIGNAL, NULL, 0) = 61\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 89, MSG_NOSIGNAL, NULL, 0) = 89\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 172, MSG_NOSIGNAL, NULL, 0) = 172\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 96, MSG_NOSIGNAL, NULL, 0) = 96\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 157, MSG_NOSIGNAL, NULL, 0) = 157\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 106, MSG_NOSIGNAL, NULL, 0) = 106\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 185, MSG_NOSIGNAL, NULL, 0) = 185\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 93, MSG_NOSIGNAL, NULL, 0) = 93\nsendto(8, \"<134>Jul 13 
00:21:55 postgres[28\"..., 143, MSG_NOSIGNAL, NULL, 0) = 143\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 176, MSG_NOSIGNAL, NULL, 0) = 176\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 168, MSG_NOSIGNAL, NULL, 0) = 168\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 53, MSG_NOSIGNAL, NULL, 0) = 53\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 100, MSG_NOSIGNAL, NULL, 0) = 100\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 96, MSG_NOSIGNAL, NULL, 0) = 96\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 96, MSG_NOSIGNAL, NULL, 0) = 96\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 160, MSG_NOSIGNAL, NULL, 0) = 160\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 98, MSG_NOSIGNAL, NULL, 0) = 98\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 85, MSG_NOSIGNAL, NULL, 0) = 85\nsendto(8, \"<134>Jul 13 00:21:55 postgres[28\"..., 53, MSG_NOSIGNAL, NULL, 0) = 53\nsendto(7, \"\\2\\0\\0\\0\\300\\3\\0\\0\\3@\\0\\0\\t\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0007\\n\\0\\0t_sc\"..., 960, 0, NULL, 0) = 960\nsendto(7, \"\\2\\0\\0\\0\\300\\3\\0\\0\\3@\\0\\0\\t\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0w\\n\\0\\0t_sc\"..., 960, 0, NULL, 0) = 960\nsendto(7, \"\\2\\0\\0\\0\\300\\3\\0\\0\\3@\\0\\0\\t\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0009\\n\\0\\0t_sc\"..., 960, 0, NULL, 0) = 960\nsendto(7, \"\\2\\0\\0\\0\\300\\3\\0\\0\\3@\\0\\0\\t\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0M\\354\\0\\0t_sc\"..., 960, 0, NULL, 0) = 960\nsendto(7, \"\\2\\0\\0\\0 \\2\\0\\0\\3@\\0\\0\\5\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0;\\n\\0\\0t_sc\"..., 544, 0, NULL, 0) = 544\nsendto(7, \"\\2\\0\\0\\0\\350\\0\\0\\0\\0\\0\\0\\0\\2\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\275\\4\\0\\0\\0\\0\\0\\0\"..., 232, 0, NULL, 0) = 232\nsendto(9, \"C\\0\\0\\0\\fPREPARE\\0Z\\0\\0\\0\\5I\", 19, 0, NULL, 0) = 19\nrecvfrom(9, \"Q\\0\\0\\0\\35execute x(1396, 580358);\\0\", 8192, 0, NULL, NULL) = 30\nlseek(18, 0, SEEK_END) = 760832000\nlseek(167, 0, SEEK_END) = 8192\nlseek(18, 0, SEEK_END) = 760832000\nlseek(167, 0, SEEK_END) = 8192\nsendto(9, \"T\\0\\0\\0025\\0\\23col_0_0_\\0\\0\\1\\t\\263\\0\\2\\0\\0\\0\\31\\377\\377\\377\\377\\377\\377\"..., 805, 0, NULL, 0) = 805\nrecvfrom(9, \"X\\0\\0\\0\\4\", 8192, 0, NULL, NULL) = 5\nsendto(7, \"\\2\\0\\0\\0\\300\\3\\0\\0\\3@\\0\\0\\t\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0007\\355\\0\\0\\0\\0\\0\\0\"..., 960, 0, NULL, 0) = 960\nsendto(7, \"\\2\\0\\0\\0P\\1\\0\\0\\3@\\0\\0\\3\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\337\\373\\0\\0\\0\\0\\0\\0\"..., 336, 0, NULL, 0) = 336\nexit_group(0) = ?\nProcess 2826 detached\n\n\nI checked my semaphores at the os level\nipcs -s|grep postgres| wc -l \n54\n\nthey all look like:\n\n0x00eb7970 21168207 postgres 600 17 \n0x00eb7971 21200976 postgres 600 17 \n0x00eb7972 21233745 postgres 600 17 \n0x00eb7973 21266514 postgres 600 17 \n0x00eb7974 21299283 postgres 600 17 \n0x00eb7975 21332052 postgres 600 17 \n0x00eb7976 21364821 postgres 600 17 \n\n\nWhen i restart the DB again things are happy:\npsql -f tt.sql a\nPager usage is off.\nTiming is on.\nPREPARE\nTime: 253.850 ms\n[...]\n(1 row)\n\nTime: 0.891 ms\n\n\nAny ideas?\n\nThanks\n\n",
"msg_date": "Thu, 12 Jul 2012 23:51:22 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow prepare, lots of semop calls."
}
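
No follow-up appears in the archive, but when chasing something like this it can help to watch the slow backend from a second session; note that in 9.1 pg_stat_activity only reports waits on heavyweight locks, while semop traffic like the trace above is more typical of LWLock or buffer contention, which shows up as "running". A sketch using the 9.1-era column names and a placeholder DSN:

import time
import psycopg2

conn = psycopg2.connect("dbname=test")      # placeholder DSN
conn.autocommit = True
cur = conn.cursor()
for _ in range(20):                          # sample while the slow PREPARE runs
    cur.execute("SELECT procpid, waiting, current_query "
                "FROM pg_stat_activity WHERE current_query LIKE 'PREPARE%'")
    for pid, waiting, query in cur.fetchall():
        print(pid, "waiting on a lock" if waiting else "running", query[:60])
    time.sleep(1)
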
] |
[
{
"msg_contents": "I'm upgrading from 8.4 to 9.1 and experiencing a performance degradation on\na key query with 2 views and 2 tables.\n\nOld server \"PostgreSQL 8.4.10 on i686-redhat-linux-gnu, compiled by GCC gcc\n(GCC) 4.1.2 20080704 (Red Hat 4.1.2-51), 32-bit\"\nNew server \"PostgreSQL 9.1.4 on x86_64-unknown-linux-gnu, compiled by gcc\n(GCC) 4.4.6 20110731 (Red Hat 4.4.6-3), 64-bit\"\n\nThe query is as follows:\nSELECT *\nFROM edge_geom\nWHERE (edge_geom.start_id, edge_geom.end_id) IN (('congo', 'donal'),\n('golow', 'tundo'), ('golow', 'arthur'), ('golow', 'porto'), ('tundo',\n'donal'), ('golow', 'newbo'), ('porto', 'donal'), ('decal', 'donal'),\n('arthur', 'donal'), ('leandro', 'donal'), ('golow', 'decal'), ('golow',\n'salad'), ('newbo', 'donal'), ('golow', 'congo'), ('salad', 'donal'),\n('golow', 'leandro'));\n\nSchema definitions:\nhttp://pastebin.com/0YNG8jSC\nI've tried to simplify the table and view definitions wherever possible.\n\nAnd the query plans:\n8.4: 314ms: http://explain.depesz.com/s/GkX\n9.1: 10,059ms :http://explain.depesz.com/s/txn\n9.1 with setting `enable_material = off`: 1,635ms\nhttp://explain.depesz.com/s/gIu\n\nSo it looks like the Materialize in the query plan is causing the 30x\nslowdown.\nWith the materialize strategy switched off , it's still 5x slower on 9.1\nvs. 8.4.\n\nAny help appreciated, I acknowledge that the tables and views aren't the\nsimplest.\n\nThanks!\n\nEoghan\n\nI'm upgrading from 8.4 to 9.1 and experiencing a performance degradation on a key query with 2 views and 2 tables.\nOld server \"PostgreSQL 8.4.10 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-51), 32-bit\"New server \"PostgreSQL 9.1.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3), 64-bit\"\nThe query is as follows:SELECT *FROM edge_geomWHERE (edge_geom.start_id, edge_geom.end_id) IN (('congo', 'donal'), ('golow', 'tundo'), ('golow', 'arthur'), ('golow', 'porto'), ('tundo', 'donal'), ('golow', 'newbo'), ('porto', 'donal'), ('decal', 'donal'), ('arthur', 'donal'), ('leandro', 'donal'), ('golow', 'decal'), ('golow', 'salad'), ('newbo', 'donal'), ('golow', 'congo'), ('salad', 'donal'), ('golow', 'leandro'));\nSchema definitions:http://pastebin.com/0YNG8jSCI've tried to simplify the table and view definitions wherever possible.\n\nAnd the query plans:8.4: 314ms: http://explain.depesz.com/s/GkX9.1: 10,059ms :http://explain.depesz.com/s/txn\n9.1 with setting `enable_material = off`: 1,635ms http://explain.depesz.com/s/gIuSo it looks like the Materialize in the query plan is causing the 30x slowdown.\nWith the materialize strategy switched off , it's still 5x slower on 9.1 vs. 8.4.Any help appreciated, I acknowledge that the tables and views aren't the simplest.\nThanks!Eoghan",
"msg_date": "Fri, 13 Jul 2012 15:11:23 +0100",
"msg_from": "Eoghan Murray <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor performance problem with Materialize,\n\t8.4 -> 9.1 (enable_material)"
},
{
"msg_contents": "On Fri, Jul 13, 2012 at 11:11 AM, Eoghan Murray <[email protected]> wrote:\n> 8.4: 314ms: http://explain.depesz.com/s/GkX\n> 9.1: 10,059ms :http://explain.depesz.com/s/txn\n> 9.1 with setting `enable_material = off`: 1,635ms\n> http://explain.depesz.com/s/gIu\n\nI think the problem is it's using a merge join, with a sort inside\nthat's producing 600x more rows than expected, while 8.4 does a hash\njoin with no intermediate big tables instead.\n\nWhat's your configuration like in both servers? (that could explain\nplanning differences)\n",
"msg_date": "Fri, 13 Jul 2012 12:53:24 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance problem with Materialize,\n\t8.4 -> 9.1 (enable_material)"
},
{
"msg_contents": "On Fri, Jul 13, 2012 at 1:28 PM, Eoghan Murray <[email protected]> wrote:\n> Thank you Claudio,\n>\n> I haven't touched the 9.1 configuration (with the exception of toggling the\n> enable_material setting). http://pastebin.com/nDjcYrUd\n> As far as I can remember I haven't changed the 8.4 configuration:\n> http://pastebin.com/w4XhDRX4\n\nMaybe that's your problem. Postgres default configuration is not only\nsuboptimal, but also a poor reflection of your hardware (what's your\nhardware, btw?). Which means postgres' expected costs won't hold. I'm\nthinking especially about your effective_cache_size, which may\ninfluence postgres' decision to use one join method vs another, but\nmany other settings would probable influence.\n\nSpend a bit of time to configure both servers such that the\nconfiguration reflects the hardware, and try your queries again.\n",
"msg_date": "Fri, 13 Jul 2012 13:40:31 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance problem with Materialize,\n\t8.4 -> 9.1 (enable_material)"
},
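
A quick way to act on this advice is to diff the planner-relevant settings between the two servers straight out of pg_settings; the DSNs and the list of settings below are just examples:

import psycopg2

SETTINGS = ('shared_buffers', 'effective_cache_size', 'work_mem',
            'random_page_cost', 'seq_page_cost', 'default_statistics_target')

def snapshot(dsn):
    conn = psycopg2.connect(dsn)
    cur = conn.cursor()
    cur.execute("SELECT name, setting, unit FROM pg_settings "
                "WHERE name IN %s ORDER BY name", (SETTINGS,))
    rows = cur.fetchall()
    conn.close()
    return rows

old = snapshot("host=oldbox dbname=mydb")   # the 8.4 server (placeholder DSN)
new = snapshot("host=newbox dbname=mydb")   # the 9.1 server (placeholder DSN)
for (name, o_val, o_unit), (_, n_val, n_unit) in zip(old, new):
    mark = "" if (o_val, o_unit) == (n_val, n_unit) else "  <-- differs"
    print("%-28s  8.4: %s %s   9.1: %s %s%s"
          % (name, o_val, o_unit or "", n_val, n_unit or "", mark))
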
{
"msg_contents": "Eoghan Murray <[email protected]> writes:\n> I'm upgrading from 8.4 to 9.1 and experiencing a performance degradation on\n> a key query with 2 views and 2 tables.\n\nI think the core of the problem is the lousy rowcount estimate for the\nresult of the edited_stop_2 view: when you've got 1 row estimated and\nalmost 10000 rows actual, it's almost guaranteed that the rest of the\nplan is going to be bad. It's pure luck that 8.4 chooses a plan that\nfails to suck, because it's optimizing for the wrong case. 9.1 isn't\nso lucky, but that doesn't make 9.1 broken, just less lucky.\n\nI'm not terribly disappointed that that rowcount estimate is bad,\nbecause this seems like a rather weird and inefficient way to do \"get\nthe rows with the maximal \"updated\" values\". I'd suggest experimenting\nwith some other definitions for edited_stop_2, such as using a subquery:\n\n SELECT ...\n FROM stop o\n WHERE updated = (select max(updated) from stop i\n where o.node_id = i.node_id and ...);\n\nThis might be reasonably efficient given your pkey index for \"stop\".\n\nOr if you don't mind using a Postgres-ism, you could try DISTINCT ON:\n\n SELECT DISTINCT ON (node_id, org_id, edge_id, stop_pos) ...\n FROM stop\n ORDER BY node_id DESC, org_id DESC, edge_id DESC, stop_pos DESC, updated DESC;\n\nSee the \"weather reports\" example in our SELECT reference page for\nsome explanation of how that works. Again, the ORDER BY is chosen\nto match your pkey index; I'm not sure that the planner will think\na full-index scan beats a seqscan-and-sort, but you may as well\ngive it the option.\n\nOf these, I would bet that the first will work better if your typical\nusage is such that only a few rows need to be fetched from the view.\nI believe the DISTINCT ON is likely to act as an optimization fence\nforcing the whole view to be evaluated when using the second definition.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Jul 2012 13:40:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance problem with Materialize,\n\t8.4 -> 9.1 (enable_material)"
},
{
"msg_contents": "On Fri, Jul 13, 2012 at 3:22 PM, Eoghan Murray <[email protected]> wrote:\n> This is with `enable_material=off`, with `enable_material=on` it also\n> doesn't go for the Merge Join, but the Materialize step pushes it up to over\n> 7,000ms.\n\nI think this one could stem from what Tom observed, that the rowcount\nestimate is way off.\n\nIt usually helps a great deal if you can get PG to estimate correctly,\nbe it by doing analyze, incrementing statistics or simply formulating\nyour query in a way that's friendlier for PG's estimator.\n",
"msg_date": "Fri, 13 Jul 2012 15:56:50 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance problem with Materialize,\n\t8.4 -> 9.1 (enable_material)"
}
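
Following that suggestion, one concrete thing to try is raising the statistics target on the columns behind the bad edited_stop_2 estimate (table and column names here are the ones from Tom's reply) and re-analyzing, then checking whether the estimated row count moves toward the ~10000 actual; a sketch, with a placeholder DSN:

import psycopg2

conn = psycopg2.connect("host=newbox dbname=mydb")   # placeholder DSN (the 9.1 box)
conn.autocommit = True
cur = conn.cursor()
for col in ('node_id', 'org_id', 'edge_id', 'stop_pos', 'updated'):
    # identifiers are hard-coded above, so plain string formatting is fine here
    cur.execute("ALTER TABLE stop ALTER COLUMN %s SET STATISTICS 1000" % col)
cur.execute("ANALYZE stop")
cur.execute("EXPLAIN SELECT * FROM edited_stop_2")
print("\n".join(r[0] for r in cur.fetchall()))
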
] |
[
{
"msg_contents": "Dear Friends,\n\nIs there a tool available to perform Data Model review, from a performance\nperspective?\nOne which can be used to check if the data model is optimal or not.\n\nThanks,\nSreejith.\n\nDear Friends,\n Is there a tool available to perform Data Model review, from a performance perspective?\nOne which can be used to check if the data model is optimal or not.\nThanks,\n Sreejith.",
"msg_date": "Fri, 13 Jul 2012 21:55:20 +0530",
"msg_from": "B Sreejith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is there a tool available to perform Data Model review,\n\tfrom a performance perspective?"
}
] |
[
{
"msg_contents": "Hello,\n\nOur postgres 9.0 DB has one table (the important one) where the bulk of \ninsertions is happening. We are looking more or less at around 15K to \n20K insertions per minute and my measurements give me a rate of 0.60 to \n1 msec per insertion. A summary of the table where the insertions are \nhappening is as follows:\n\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: msg_id \nbigint NOT NULL DEFAULT \nnextval('feed_all_y2012m07.messages_msg_id_seq'::regclass),\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: msg_type \nsmallint NOT NULL,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: obj_id \ninteger NOT NULL,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \nmsg_date_rec timestamp without time zone NOT NULL,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: msg_text \ntext NOT NULL,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \nmsg_expanded boolean NOT NULL,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: msg_time \ntime without time zone,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \npos_accuracy boolean NOT NULL DEFAULT false,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: pos_raim \nboolean NOT NULL DEFAULT false,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: pos_lon \ninteger NOT NULL DEFAULT (181 * 600000),\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: pos_lat \ninteger NOT NULL DEFAULT (91 * 60000),\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \npos_georef1 character varying(2) NOT NULL,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \npos_georef2 character varying(2) NOT NULL,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \npos_georef3 character varying(2) NOT NULL,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \npos_georef4 character varying(2) NOT NULL,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \npos_point geometry,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \nship_speed smallint NOT NULL,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \nship_course smallint NOT NULL,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \nship_heading smallint NOT NULL,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \nship_second smallint NOT NULL,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \nship_radio integer NOT NULL,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \nship_status ais_server.nav_status NOT NULL DEFAULT \n'NOT_DEFINED'::ais_server.nav_status,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \nship_turn smallint NOT NULL DEFAULT 128,\n-- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \nship_maneuver smallint NOT NULL,\n CONSTRAINT ship_a_pos_messages_wk0_pkey PRIMARY KEY (msg_id )\n\nThe table is created in table space \"Data\" while its indexes in table \nspace \"Index\" (a different HD). Now once the database is empty the \nconfiguration is flying but of course this is not the case always. 5 \ndays later and around 55,000,000 rows later the insertions are literally \nso slow that the application server has to drop inserts in order to keep \nup. 
To be precise we are looking now at 1 insertion every 5 to 10, \nsometimes 25 msec!!\n\nAfter lots of tuning both on the postgres server and the stored procs, \nafter installing 18G Ram and appropriately changing the shared_buffers, \nworking_mem etc, we realized that our index hard disk had 100% \nutilization and essentially it was talking longer to update the indexes \nthan to update the table. Well I took a radical approach and dropped all \nthe indexes and... miracle, the db got back in to life, insertion went \nback to a healthy 0.70msec but of course now I have no indexes. It is my \nbelief that I am doing something fundamentally wrong with the index \ncreation as 4 indexes cannot really bring a database to a halt. Here are \nthe indexes I was using:\n\nCREATE INDEX idx_ship_a_pos_messages_wk0_date_pos\n ON feed_all_y2012m07.ship_a_pos_messages_wk0\n USING btree\n (msg_date_rec , pos_georef1 , pos_georef2 , pos_georef3 , pos_georef4 )\nTABLESPACE index;\n\nCREATE INDEX idx_ship_a_pos_messages_wk0_date_rec\n ON feed_all_y2012m07.ship_a_pos_messages_wk0\n USING btree\n (msg_date_rec )\nTABLESPACE index;\n\nCREATE INDEX idx_ship_a_pos_messages_wk0_object\n ON feed_all_y2012m07.ship_a_pos_messages_wk0\n USING btree\n (obj_id , msg_type , msg_text , msg_date_rec )\nTABLESPACE index;\n\nCREATE INDEX idx_ship_a_pos_messages_wk0_pos\n ON feed_all_y2012m07.ship_a_pos_messages_wk0\n USING btree\n (pos_georef1 , pos_georef2 , pos_georef3 , pos_georef4 )\nTABLESPACE index;\n\nAs I have run out of ideas any help will be really appreciated. For the \ntime being i can live without indexes but sooner or later people will \nneed to access the live data. I don't even dare to think what will \nhappen to the database if I only introduce a spatial GIS index that I \nneed. Question: Is there any possibility that I must include the primary \nkey into my index to \"help\" during indexing? If I remember well MS-SQL \nhas such a \"feature\".\n\nKind Regards\nYiannis\n\n",
"msg_date": "Sun, 15 Jul 2012 02:14:45 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index slow down insertions..."
},
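
Before dropping everything, it can be worth checking which of those indexes the read side ever uses, since an unused index is pure overhead on the insert path; a sketch against the statistics views, assuming psycopg2 and the schema name from the post:

import psycopg2

conn = psycopg2.connect("dbname=test")      # placeholder DSN
cur = conn.cursor()
cur.execute("""
    SELECT relname, indexrelname, idx_scan,
           pg_size_pretty(pg_relation_size(indexrelid)) AS size
    FROM pg_stat_user_indexes
    WHERE schemaname = 'feed_all_y2012m07'
    ORDER BY idx_scan ASC, pg_relation_size(indexrelid) DESC
""")
for table, index, scans, size in cur.fetchall():
    print("%-40s %-45s %10d scans  %s" % (table, index, scans, size))
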
{
"msg_contents": "On 15/07/2012 02:14, Ioannis Anagnostopoulos wrote:\n> Hello,\n>\n> Our postgres 9.0 DB has one table (the important one) where the bulk \n> of insertions is happening. We are looking more or less at around 15K \n> to 20K insertions per minute and my measurements give me a rate of \n> 0.60 to 1 msec per insertion. A summary of the table where the \n> insertions are happening is as follows:\n>\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: msg_id \n> bigint NOT NULL DEFAULT \n> nextval('feed_all_y2012m07.messages_msg_id_seq'::regclass),\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> msg_type smallint NOT NULL,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: obj_id \n> integer NOT NULL,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> msg_date_rec timestamp without time zone NOT NULL,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> msg_text text NOT NULL,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> msg_expanded boolean NOT NULL,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> msg_time time without time zone,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> pos_accuracy boolean NOT NULL DEFAULT false,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> pos_raim boolean NOT NULL DEFAULT false,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: pos_lon \n> integer NOT NULL DEFAULT (181 * 600000),\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: pos_lat \n> integer NOT NULL DEFAULT (91 * 60000),\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> pos_georef1 character varying(2) NOT NULL,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> pos_georef2 character varying(2) NOT NULL,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> pos_georef3 character varying(2) NOT NULL,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> pos_georef4 character varying(2) NOT NULL,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> pos_point geometry,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> ship_speed smallint NOT NULL,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> ship_course smallint NOT NULL,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> ship_heading smallint NOT NULL,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> ship_second smallint NOT NULL,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> ship_radio integer NOT NULL,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> ship_status ais_server.nav_status NOT NULL DEFAULT \n> 'NOT_DEFINED'::ais_server.nav_status,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> ship_turn smallint NOT NULL DEFAULT 128,\n> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n> ship_maneuver smallint NOT NULL,\n> CONSTRAINT ship_a_pos_messages_wk0_pkey PRIMARY KEY (msg_id )\n>\n> The table is created in table space \"Data\" while its indexes in table \n> space \"Index\" (a different HD). Now once the database is empty the \n> configuration is flying but of course this is not the case always. 5 \n> days later and around 55,000,000 rows later the insertions are \n> literally so slow that the application server has to drop inserts in \n> order to keep up. 
To be precise we are looking now at 1 insertion \n> every 5 to 10, sometimes 25 msec!!\n>\n> After lots of tuning both on the postgres server and the stored procs, \n> after installing 18G Ram and appropriately changing the \n> shared_buffers, working_mem etc, we realized that our index hard disk \n> had 100% utilization and essentially it was talking longer to update \n> the indexes than to update the table. Well I took a radical approach \n> and dropped all the indexes and... miracle, the db got back in to \n> life, insertion went back to a healthy 0.70msec but of course now I \n> have no indexes. It is my belief that I am doing something \n> fundamentally wrong with the index creation as 4 indexes cannot really \n> bring a database to a halt. Here are the indexes I was using:\n>\n> CREATE INDEX idx_ship_a_pos_messages_wk0_date_pos\n> ON feed_all_y2012m07.ship_a_pos_messages_wk0\n> USING btree\n> (msg_date_rec , pos_georef1 , pos_georef2 , pos_georef3 , pos_georef4 )\n> TABLESPACE index;\n>\n> CREATE INDEX idx_ship_a_pos_messages_wk0_date_rec\n> ON feed_all_y2012m07.ship_a_pos_messages_wk0\n> USING btree\n> (msg_date_rec )\n> TABLESPACE index;\n>\n> CREATE INDEX idx_ship_a_pos_messages_wk0_object\n> ON feed_all_y2012m07.ship_a_pos_messages_wk0\n> USING btree\n> (obj_id , msg_type , msg_text , msg_date_rec )\n> TABLESPACE index;\n>\n> CREATE INDEX idx_ship_a_pos_messages_wk0_pos\n> ON feed_all_y2012m07.ship_a_pos_messages_wk0\n> USING btree\n> (pos_georef1 , pos_georef2 , pos_georef3 , pos_georef4 )\n> TABLESPACE index;\n>\n> As I have run out of ideas any help will be really appreciated. For \n> the time being i can live without indexes but sooner or later people \n> will need to access the live data. I don't even dare to think what \n> will happen to the database if I only introduce a spatial GIS index \n> that I need. Question: Is there any possibility that I must include \n> the primary key into my index to \"help\" during indexing? If I remember \n> well MS-SQL has such a \"feature\".\n>\n> Kind Regards\n> Yiannis\n>\n>\nSome more information regarding this \"problem\". I start to believe that \nthe problem is mainly due to the autovacum that happens to prevent \nwraparound. As our database is heavily used with inserts, wraparounds \nare happing very often. The vacuums that are triggered to deal with the \nsituation have an adverse effect on the index HD. In essence as the \ndatabase covers 12 months of data an autovacuum to prevent wrap around \nis more or less constantly present starving the actual data insertion \nprocess from index HD resources (especially when those indexes are quite \na lot as I said in my previous post). Now, given the fact that only the \n\"current\" month is updated with inserts while the previous months are \nessentially ready-only(static) I think that moving the indexes of the \npast months to an archive HD or dropping those that are not necessary \nany more would probably solve the problem. Does my theory hold any water?\n\nKind Regards\nYiannis\n\n",
"msg_date": "Mon, 16 Jul 2012 11:24:27 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index slow down insertions..."
},
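
To test that theory, the transaction-ID age of each partition shows how close the forced anti-wraparound vacuums are, and freezing a month once it becomes read-only (in a quiet window) means those forced vacuums later have little left to rewrite; a sketch, with a placeholder DSN and an assumed name for the previous month's partition:

import psycopg2

conn = psycopg2.connect("dbname=test")      # placeholder DSN
conn.autocommit = True                      # VACUUM cannot run inside a transaction block
cur = conn.cursor()
cur.execute("""
    SELECT n.nspname, c.relname, age(c.relfrozenxid) AS xid_age
    FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r' AND n.nspname LIKE 'feed_all_%'
    ORDER BY age(c.relfrozenxid) DESC LIMIT 20
""")
for schema, table, xid_age in cur.fetchall():
    print("%-22s %-38s %12d" % (schema, table, xid_age))

# Off-hours, freeze the month that has just gone read-only (partition name assumed):
cur.execute("VACUUM (FREEZE, ANALYZE) feed_all_y2012m06.ship_a_pos_messages_wk0")
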
{
"msg_contents": "On 16/07/2012 11:24, Ioannis Anagnostopoulos wrote:\n> On 15/07/2012 02:14, Ioannis Anagnostopoulos wrote:\n>> Hello,\n>>\n>> Our postgres 9.0 DB has one table (the important one) where the bulk \n>> of insertions is happening. We are looking more or less at around 15K \n>> to 20K insertions per minute and my measurements give me a rate of \n>> 0.60 to 1 msec per insertion. A summary of the table where the \n>> insertions are happening is as follows:\n>>\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: msg_id \n>> bigint NOT NULL DEFAULT \n>> nextval('feed_all_y2012m07.messages_msg_id_seq'::regclass),\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> msg_type smallint NOT NULL,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: obj_id \n>> integer NOT NULL,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> msg_date_rec timestamp without time zone NOT NULL,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> msg_text text NOT NULL,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> msg_expanded boolean NOT NULL,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> msg_time time without time zone,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> pos_accuracy boolean NOT NULL DEFAULT false,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> pos_raim boolean NOT NULL DEFAULT false,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> pos_lon integer NOT NULL DEFAULT (181 * 600000),\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> pos_lat integer NOT NULL DEFAULT (91 * 60000),\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> pos_georef1 character varying(2) NOT NULL,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> pos_georef2 character varying(2) NOT NULL,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> pos_georef3 character varying(2) NOT NULL,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> pos_georef4 character varying(2) NOT NULL,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> pos_point geometry,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> ship_speed smallint NOT NULL,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> ship_course smallint NOT NULL,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> ship_heading smallint NOT NULL,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> ship_second smallint NOT NULL,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> ship_radio integer NOT NULL,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> ship_status ais_server.nav_status NOT NULL DEFAULT \n>> 'NOT_DEFINED'::ais_server.nav_status,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> ship_turn smallint NOT NULL DEFAULT 128,\n>> -- Inherited from table feed_all_y2012m07.ship_a_pos_messages: \n>> ship_maneuver smallint NOT NULL,\n>> CONSTRAINT ship_a_pos_messages_wk0_pkey PRIMARY KEY (msg_id )\n>>\n>> The table is created in table space \"Data\" while its indexes in table \n>> space \"Index\" (a different HD). Now once the database is empty the \n>> configuration is flying but of course this is not the case always. 
5 \n>> days later and around 55,000,000 rows later the insertions are \n>> literally so slow that the application server has to drop inserts in \n>> order to keep up. To be precise we are looking now at 1 insertion \n>> every 5 to 10, sometimes 25 msec!!\n>>\n>> After lots of tuning both on the postgres server and the stored \n>> procs, after installing 18G Ram and appropriately changing the \n>> shared_buffers, working_mem etc, we realized that our index hard disk \n>> had 100% utilization and essentially it was talking longer to update \n>> the indexes than to update the table. Well I took a radical approach \n>> and dropped all the indexes and... miracle, the db got back in to \n>> life, insertion went back to a healthy 0.70msec but of course now I \n>> have no indexes. It is my belief that I am doing something \n>> fundamentally wrong with the index creation as 4 indexes cannot \n>> really bring a database to a halt. Here are the indexes I was using:\n>>\n>> CREATE INDEX idx_ship_a_pos_messages_wk0_date_pos\n>> ON feed_all_y2012m07.ship_a_pos_messages_wk0\n>> USING btree\n>> (msg_date_rec , pos_georef1 , pos_georef2 , pos_georef3 , \n>> pos_georef4 )\n>> TABLESPACE index;\n>>\n>> CREATE INDEX idx_ship_a_pos_messages_wk0_date_rec\n>> ON feed_all_y2012m07.ship_a_pos_messages_wk0\n>> USING btree\n>> (msg_date_rec )\n>> TABLESPACE index;\n>>\n>> CREATE INDEX idx_ship_a_pos_messages_wk0_object\n>> ON feed_all_y2012m07.ship_a_pos_messages_wk0\n>> USING btree\n>> (obj_id , msg_type , msg_text , msg_date_rec )\n>> TABLESPACE index;\n>>\n>> CREATE INDEX idx_ship_a_pos_messages_wk0_pos\n>> ON feed_all_y2012m07.ship_a_pos_messages_wk0\n>> USING btree\n>> (pos_georef1 , pos_georef2 , pos_georef3 , pos_georef4 )\n>> TABLESPACE index;\n>>\n>> As I have run out of ideas any help will be really appreciated. For \n>> the time being i can live without indexes but sooner or later people \n>> will need to access the live data. I don't even dare to think what \n>> will happen to the database if I only introduce a spatial GIS index \n>> that I need. Question: Is there any possibility that I must include \n>> the primary key into my index to \"help\" during indexing? If I \n>> remember well MS-SQL has such a \"feature\".\n>>\n>> Kind Regards\n>> Yiannis\n>>\n>>\n> Some more information regarding this \"problem\". I start to believe \n> that the problem is mainly due to the autovacum that happens to \n> prevent wraparound. As our database is heavily used with inserts, \n> wraparounds are happing very often. The vacuums that are triggered to \n> deal with the situation have an adverse effect on the index HD. In \n> essence as the database covers 12 months of data an autovacuum to \n> prevent wrap around is more or less constantly present starving the \n> actual data insertion process from index HD resources (especially when \n> those indexes are quite a lot as I said in my previous post). Now, \n> given the fact that only the \"current\" month is updated with inserts \n> while the previous months are essentially ready-only(static) I think \n> that moving the indexes of the past months to an archive HD or \n> dropping those that are not necessary any more would probably solve \n> the problem. Does my theory hold any water?\n>\n> Kind Regards\n> Yiannis\n>\n>\nHello again, sorry for topping up the thread but I think that the more \ninformation I provide you the more likely it is to get an answer. 
So as \nI go along, I have stripped completely the database from additional \nindexes, those that possible delay the insertion process, of course \nmaintaining the pkey and 2 or three absolutely mandatory indexes for my \nselect queries. As a result I have a sleek and steady performance of \naround 0.70 msec per insertion. However I have now closed a full circle \nas I have a fast database but when I try to \"select\", making optimum \nusage of the left over indexes, the insertion process slows down. Yes my \nselections are huge (they are not slow, just huge as it is about \ngeographical points etc) but I am asking if there is anyway that I can \n\"prioritise\" the insertions over the \"selections\". These \"selections\" \nare happening anyway as batch process during night so I don't really \nmind if they will take 2 or 5 hours, as long as they are ready at 9.00am \nnext day. Again any advice will be highly appreciated.\n\nKind Regards\nYiannis\n",
"msg_date": "Thu, 19 Jul 2012 13:24:34 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index slow down insertions..."
},
{
"msg_contents": "Hello,\nthe following query seems to take ages to get executed. However I am \nmore than sure (as you can see from the explain analyse) that uses all \nthe correct indexes. In general I have serious issues with joins in my \ndatabase. This is a Postgres ver. 9.0 running postgis with the \n\"_int.sql\" contrib enabled. Further more I think that the execution of \nthis query seriously degrades the performance of the database. I had to \ndevice this query and run it like an overnight batch to populate a \ntable as I couldn't afford users to execute it over and over in a \"need \nto do\" base. Unfortunately it is still slow and some times it either \nbrings down the whole database (my insertions are buffered on the app \nserver) or it never completes before morning.\n\nSELECT\n src_id,\n date_trunc('day', message_copies.msg_date_rec) as date_count,\n message_copies.pos_georef1,\n message_copies.pos_georef2,\n message_copies.pos_georef3,\n message_copies.pos_georef4,\n ais_server.array_accum(CASE WHEN msg_type BETWEEN 1 and 3 \nTHEN message_copies.msg_id END) as msgA_array,\n ais_server.array_accum(CASE WHEN msg_type = 18 THEN \nmessage_copies.msg_id END) as msgB_std_array,\n ais_server.array_accum(CASE WHEN msg_type = 19 THEN \nmessage_copies.msg_id END) as msgB_ext_array,\n uniq\n (\n ais_server.array_accum(CASE WHEN obj_type = 'SHIP_TYPE_A' \nTHEN obj_mmsi END)\n ) as mmsi_type_A_array,\n uniq\n (\n ais_server.array_accum(CASE WHEN obj_type = 'SHIP_TYPE_B' \nTHEN obj_mmsi END)\n ) as mmsi_type_B_array,\n avg(ship_speed) / 10.0 as avg_speed,\n avg(ship_heading) as avg_heading,\n avg(ship_course) / 10.0 as avg_course,\n ST_Multi(ST_Collect(ship_pos_messages.pos_point)) as geom\n from\n feed_all_y2012m07.message_copies join\n (feed_all_y2012m07.ship_pos_messages join \nais_server.ship_objects on (ship_pos_messages.obj_id = \nship_objects.obj_id))\n on (message_copies.msg_id = ship_pos_messages.msg_id)\n where\n extract('day' from message_copies.msg_date_rec) = 17\n and date_trunc('day', message_copies.msg_date_rec) = '2012-07-17'\n and message_copies.src_id = 1\n and (message_copies.pos_georef1 || message_copies.pos_georef2 \n|| message_copies.pos_georef3 || message_copies.pos_georef4) <> ''\n and not (message_copies.pos_georef1 || \nmessage_copies.pos_georef2 || message_copies.pos_georef3 || \nmessage_copies.pos_georef4) is null\n and extract('day' from ship_pos_messages.msg_date_rec) = 17\n group by src_id, date_count, message_copies.pos_georef1, \nmessage_copies.pos_georef2, message_copies.pos_georef3, \nmessage_copies.pos_georef4;\n\nWhat follows is the Explain Analyze:\n\"HashAggregate (cost=21295.20..21298.51 rows=53 width=148) (actual \ntime=17832235.321..17832318.546 rows=2340 loops=1)\"\n\" -> Nested Loop (cost=0.00..21293.21 rows=53 width=148) (actual \ntime=62.188..17801780.764 rows=387105 loops=1)\"\n\" -> Nested Loop (cost=0.00..20942.93 rows=53 width=144) \n(actual time=62.174..17783236.718 rows=387105 loops=1)\"\n\" Join Filter: (feed_all_y2012m07.message_copies.msg_id = \nfeed_all_y2012m07.ship_pos_messages.msg_id)\"\n\" -> Append (cost=0.00..19057.93 rows=53 width=33) \n(actual time=62.124..5486473.545 rows=387524 loops=1)\"\n\" -> Seq Scan on message_copies (cost=0.00..0.00 \nrows=1 width=68) (actual time=0.000..0.000 rows=0 loops=1)\"\n\" Filter: ((src_id = 1) AND \n(date_trunc('day'::text, msg_date_rec) = '2012-07-17 \n00:00:00'::timestamp without time zone) AND (date_part('day'::text, \nmsg_date_rec) = 17::double precision) AND (NOT (((((pos_georef1)::text 
\n|| (pos_georef2)::text) || (pos_georef3)::text) || (pos_georef4)::text) \nIS NULL)) AND (((((pos_georef1)::text || (pos_georef2)::text) || \n(pos_georef3)::text) || (pos_georef4)::text) <> ''::text))\"\n\" -> Index Scan using \nidx_message_copies_wk2_date_src_pos_partial on message_copies_wk2 \nmessage_copies (cost=0.00..19057.93 rows=52 width=32) (actual \ntime=62.124..5486270.845 rows=387524 loops=1)\"\n\" Index Cond: ((date_trunc('day'::text, \nmsg_date_rec) = '2012-07-17 00:00:00'::timestamp without time zone) AND \n(src_id = 1))\"\n\" Filter: ((date_part('day'::text, \nmsg_date_rec) = 17::double precision) AND (NOT (((((pos_georef1)::text \n|| (pos_georef2)::text) || (pos_georef3)::text) || (pos_georef4)::text) \nIS NULL)) AND (((((pos_georef1)::text || (pos_georef2)::text) || \n(pos_georef3)::text) || (pos_georef4)::text) <> ''::text))\"\n\" -> Append (cost=0.00..35.50 rows=5 width=93) (actual \ntime=31.684..31.724 rows=1 loops=387524)\"\n\" -> Seq Scan on ship_pos_messages (cost=0.00..0.00 \nrows=1 width=52) (actual time=0.001..0.001 rows=0 loops=387524)\"\n\" Filter: (date_part('day'::text, \nfeed_all_y2012m07.ship_pos_messages.msg_date_rec) = 17::double precision)\"\n\" -> Seq Scan on ship_a_pos_messages \nship_pos_messages (cost=0.00..0.00 rows=1 width=52) (actual \ntime=0.000..0.000 rows=0 loops=387524)\"\n\" Filter: (date_part('day'::text, \nfeed_all_y2012m07.ship_pos_messages.msg_date_rec) = 17::double precision)\"\n\" -> Index Scan using ship_b_std_pos_messages_pkey \non ship_b_std_pos_messages ship_pos_messages (cost=0.00..9.03 rows=1 \nwidth=120) (actual time=0.008..0.008 rows=0 loops=387524)\"\n\" Index Cond: \n(feed_all_y2012m07.ship_pos_messages.msg_id = \nfeed_all_y2012m07.message_copies.msg_id)\"\n\" Filter: (date_part('day'::text, \nfeed_all_y2012m07.ship_pos_messages.msg_date_rec) = 17::double precision)\"\n\" -> Index Scan using ship_b_ext_pos_messages_pkey \non ship_b_ext_pos_messages ship_pos_messages (cost=0.00..7.90 rows=1 \nwidth=120) (actual time=0.004..0.004 rows=0 loops=387524)\"\n\" Index Cond: \n(feed_all_y2012m07.ship_pos_messages.msg_id = \nfeed_all_y2012m07.message_copies.msg_id)\"\n\" Filter: (date_part('day'::text, \nfeed_all_y2012m07.ship_pos_messages.msg_date_rec) = 17::double precision)\"\n\" -> Index Scan using ship_a_pos_messages_wk2_pkey \non ship_a_pos_messages_wk2 ship_pos_messages (cost=0.00..18.57 rows=1 \nwidth=120) (actual time=31.670..31.710 rows=1 loops=387524)\"\n\" Index Cond: \n(feed_all_y2012m07.ship_pos_messages.msg_id = \nfeed_all_y2012m07.message_copies.msg_id)\"\n\" Filter: (date_part('day'::text, \nfeed_all_y2012m07.ship_pos_messages.msg_date_rec) = 17::double precision)\"\n\" -> Index Scan using ship_objects_pkey on ship_objects \n(cost=0.00..6.59 rows=1 width=12) (actual time=0.041..0.044 rows=1 \nloops=387105)\"\n\" Index Cond: (ship_objects.obj_id = \nfeed_all_y2012m07.ship_pos_messages.obj_id)\"\n\"Total runtime: 17832338.082 ms\"\n\nA few more information: feed_all_y2012m07.message_copies_wk2 has 24.5 \nmillion rows only for the 17th of July and more or less the same amount \nfor rows per day since the 15th that I started populating it. So I guess \nwe are looking around 122million rows. The tables are populated with \naround 16K rows per minute.\n\nAs always any help will be greatly appreciated.\nKind Regards\nYiannis\n\n\n\n\n\n\n Hello, \n the following query seems to take ages to get executed. However I am\n more than sure (as you can see from the explain analyse) that uses\n all the correct indexes. 
In general I have serious issues with joins\n in my database. This is a Postgres ver. 9.0 running postgis with the\n \"_int.sql\" contrib enabled. Further more I think that the execution\n of this query seriously degrades the performance of the database. I\n had to device this query and run it like an overnight batch to\n populate a table as I couldn't afford users to execute it over and\n over in a \"need to do\" base. Unfortunately it is still slow and some\n times it either brings down the whole database (my insertions are\n buffered on the app server) or it never completes before morning.\n\nSELECT \n src_id,\n date_trunc('day', message_copies.msg_date_rec) as\n date_count,\n message_copies.pos_georef1,\n message_copies.pos_georef2,\n message_copies.pos_georef3,\n message_copies.pos_georef4,\n ais_server.array_accum(CASE WHEN msg_type BETWEEN 1\n and 3 THEN message_copies.msg_id END) as msgA_array,\n ais_server.array_accum(CASE WHEN msg_type = 18 THEN\n message_copies.msg_id END) as msgB_std_array,\n ais_server.array_accum(CASE WHEN msg_type = 19 THEN\n message_copies.msg_id END) as msgB_ext_array,\n uniq\n (\n ais_server.array_accum(CASE WHEN obj_type =\n 'SHIP_TYPE_A' THEN obj_mmsi END)\n ) as mmsi_type_A_array,\n uniq\n (\n ais_server.array_accum(CASE WHEN obj_type =\n 'SHIP_TYPE_B' THEN obj_mmsi END)\n ) as mmsi_type_B_array,\n avg(ship_speed) / 10.0 as avg_speed,\n avg(ship_heading) as avg_heading,\n avg(ship_course) / 10.0 as avg_course,\n ST_Multi(ST_Collect(ship_pos_messages.pos_point)) as\n geom\n from \n feed_all_y2012m07.message_copies join \n (feed_all_y2012m07.ship_pos_messages join\n ais_server.ship_objects on (ship_pos_messages.obj_id =\n ship_objects.obj_id)) \n on (message_copies.msg_id =\n ship_pos_messages.msg_id)\n where \n extract('day' from message_copies.msg_date_rec) = 17\n and date_trunc('day', message_copies.msg_date_rec) =\n '2012-07-17'\n and message_copies.src_id = 1\n and (message_copies.pos_georef1 ||\n message_copies.pos_georef2 || message_copies.pos_georef3 ||\n message_copies.pos_georef4) <> ''\n and not (message_copies.pos_georef1 ||\n message_copies.pos_georef2 || message_copies.pos_georef3 ||\n message_copies.pos_georef4) is null\n and extract('day' from ship_pos_messages.msg_date_rec) =\n 17 \n group by src_id, date_count, message_copies.pos_georef1,\n message_copies.pos_georef2, message_copies.pos_georef3,\n message_copies.pos_georef4;\n\n What follows is the Explain Analyze:\n \"HashAggregate (cost=21295.20..21298.51 rows=53 width=148)\n (actual time=17832235.321..17832318.546 rows=2340 loops=1)\"\n \" -> Nested Loop (cost=0.00..21293.21 rows=53 width=148)\n (actual time=62.188..17801780.764 rows=387105 loops=1)\"\n \" -> Nested Loop (cost=0.00..20942.93 rows=53\n width=144) (actual time=62.174..17783236.718 rows=387105\n loops=1)\"\n \" Join Filter:\n (feed_all_y2012m07.message_copies.msg_id =\n feed_all_y2012m07.ship_pos_messages.msg_id)\"\n \" -> Append (cost=0.00..19057.93 rows=53\n width=33) (actual time=62.124..5486473.545 rows=387524 loops=1)\"\n \" -> Seq Scan on message_copies \n (cost=0.00..0.00 rows=1 width=68) (actual time=0.000..0.000\n rows=0 loops=1)\"\n \" Filter: ((src_id = 1) AND\n (date_trunc('day'::text, msg_date_rec) = '2012-07-17\n 00:00:00'::timestamp without time zone) AND\n (date_part('day'::text, msg_date_rec) = 17::double precision)\n AND (NOT (((((pos_georef1)::text || (pos_georef2)::text) ||\n (pos_georef3)::text) || (pos_georef4)::text) IS NULL)) AND\n (((((pos_georef1)::text || (pos_georef2)::text) ||\n 
(pos_georef3)::text) || (pos_georef4)::text) <>\n ''::text))\"\n \" -> Index Scan using\n idx_message_copies_wk2_date_src_pos_partial on\n message_copies_wk2 message_copies (cost=0.00..19057.93 rows=52\n width=32) (actual time=62.124..5486270.845 rows=387524 loops=1)\"\n \" Index Cond: ((date_trunc('day'::text,\n msg_date_rec) = '2012-07-17 00:00:00'::timestamp without time\n zone) AND (src_id = 1))\"\n \" Filter: ((date_part('day'::text,\n msg_date_rec) = 17::double precision) AND (NOT\n (((((pos_georef1)::text || (pos_georef2)::text) ||\n (pos_georef3)::text) || (pos_georef4)::text) IS NULL)) AND\n (((((pos_georef1)::text || (pos_georef2)::text) ||\n (pos_georef3)::text) || (pos_georef4)::text) <>\n ''::text))\"\n \" -> Append (cost=0.00..35.50 rows=5 width=93)\n (actual time=31.684..31.724 rows=1 loops=387524)\"\n \" -> Seq Scan on ship_pos_messages \n (cost=0.00..0.00 rows=1 width=52) (actual time=0.001..0.001\n rows=0 loops=387524)\"\n \" Filter: (date_part('day'::text,\n feed_all_y2012m07.ship_pos_messages.msg_date_rec) = 17::double\n precision)\"\n \" -> Seq Scan on ship_a_pos_messages\n ship_pos_messages (cost=0.00..0.00 rows=1 width=52) (actual\n time=0.000..0.000 rows=0 loops=387524)\"\n \" Filter: (date_part('day'::text,\n feed_all_y2012m07.ship_pos_messages.msg_date_rec) = 17::double\n precision)\"\n \" -> Index Scan using\n ship_b_std_pos_messages_pkey on ship_b_std_pos_messages\n ship_pos_messages (cost=0.00..9.03 rows=1 width=120) (actual\n time=0.008..0.008 rows=0 loops=387524)\"\n \" Index Cond:\n (feed_all_y2012m07.ship_pos_messages.msg_id =\n feed_all_y2012m07.message_copies.msg_id)\"\n \" Filter: (date_part('day'::text,\n feed_all_y2012m07.ship_pos_messages.msg_date_rec) = 17::double\n precision)\"\n \" -> Index Scan using\n ship_b_ext_pos_messages_pkey on ship_b_ext_pos_messages\n ship_pos_messages (cost=0.00..7.90 rows=1 width=120) (actual\n time=0.004..0.004 rows=0 loops=387524)\"\n \" Index Cond:\n (feed_all_y2012m07.ship_pos_messages.msg_id =\n feed_all_y2012m07.message_copies.msg_id)\"\n \" Filter: (date_part('day'::text,\n feed_all_y2012m07.ship_pos_messages.msg_date_rec) = 17::double\n precision)\"\n \" -> Index Scan using\n ship_a_pos_messages_wk2_pkey on ship_a_pos_messages_wk2\n ship_pos_messages (cost=0.00..18.57 rows=1 width=120) (actual\n time=31.670..31.710 rows=1 loops=387524)\"\n \" Index Cond:\n (feed_all_y2012m07.ship_pos_messages.msg_id =\n feed_all_y2012m07.message_copies.msg_id)\"\n \" Filter: (date_part('day'::text,\n feed_all_y2012m07.ship_pos_messages.msg_date_rec) = 17::double\n precision)\"\n \" -> Index Scan using ship_objects_pkey on\n ship_objects (cost=0.00..6.59 rows=1 width=12) (actual\n time=0.041..0.044 rows=1 loops=387105)\"\n \" Index Cond: (ship_objects.obj_id =\n feed_all_y2012m07.ship_pos_messages.obj_id)\"\n \"Total runtime: 17832338.082 ms\"\n\n A few more information: feed_all_y2012m07.message_copies_wk2 has\n 24.5 million rows only for the 17th of July and more or less the\n same amount for rows per day since the 15th that I started\n populating it. So I guess we are looking around 122million rows. The\n tables are populated with around 16K rows per minute.\n\n As always any help will be greatly appreciated.\n Kind Regards\n Yiannis",
"msg_date": "Fri, 20 Jul 2012 22:19:22 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "A very long running query...."
},
{
"msg_contents": "On Fri, Jul 20, 2012 at 6:19 PM, Ioannis Anagnostopoulos\n<[email protected]> wrote:\n> \" -> Nested Loop (cost=0.00..20942.93 rows=53 width=144) (actual\n> time=62.174..17783236.718 rows=387105 loops=1)\"\n> \" Join Filter: (feed_all_y2012m07.message_copies.msg_id =\n> feed_all_y2012m07.ship_pos_messages.msg_id)\"\n> \" -> Append (cost=0.00..19057.93 rows=53 width=33) (actual\n> time=62.124..5486473.545 rows=387524 loops=1)\"\n\nMisestimated row counts... did you try running an analyze, or upping\nstatistic targets?\n",
"msg_date": "Fri, 20 Jul 2012 18:23:52 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On 20/07/2012 22:23, Claudio Freire wrote:\n> On Fri, Jul 20, 2012 at 6:19 PM, Ioannis Anagnostopoulos\n> <[email protected]> wrote:\n>> \" -> Nested Loop (cost=0.00..20942.93 rows=53 width=144) (actual\n>> time=62.174..17783236.718 rows=387105 loops=1)\"\n>> \" Join Filter: (feed_all_y2012m07.message_copies.msg_id =\n>> feed_all_y2012m07.ship_pos_messages.msg_id)\"\n>> \" -> Append (cost=0.00..19057.93 rows=53 width=33) (actual\n>> time=62.124..5486473.545 rows=387524 loops=1)\"\n> Misestimated row counts... did you try running an analyze, or upping\n> statistic targets?\nI have run analyse every so often. I think the problem is that as I get \n16K new rows every minutes, the \"stats\" are always out... Possible?\n",
"msg_date": "Fri, 20 Jul 2012 22:27:31 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On Fri, Jul 20, 2012 at 2:27 PM, Ioannis Anagnostopoulos\n<[email protected]> wrote:\n> On 20/07/2012 22:23, Claudio Freire wrote:\n>> Misestimated row counts... did you try running an analyze, or upping\n>> statistic targets?\n> I have run analyse every so often. I think the problem is that as I get 16K\n> new rows every minutes, the \"stats\" are always out... Possible?\n\nIt may not help much with any skew in your data that results from\ndivergent data appearing, but you can update the statistics targets\nfor those columns and analyze again, and the planner should have much\nbetter information about the distributions of their data. The max\nstats target is 10000, but the default is 100. Increasing it even\njust to 500 or 1000 should help the planner significantly.\n\nrls\n\n-- \n:wq\n",
"msg_date": "Fri, 20 Jul 2012 14:33:03 -0700",
"msg_from": "Rosser Schwarz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On Fri, Jul 20, 2012 at 6:27 PM, Ioannis Anagnostopoulos\n<[email protected]> wrote:\n> On 20/07/2012 22:23, Claudio Freire wrote:\n>>\n>> On Fri, Jul 20, 2012 at 6:19 PM, Ioannis Anagnostopoulos\n>> <[email protected]> wrote:\n>>>\n>>> \" -> Nested Loop (cost=0.00..20942.93 rows=53 width=144) (actual\n>>> time=62.174..17783236.718 rows=387105 loops=1)\"\n>>> \" Join Filter: (feed_all_y2012m07.message_copies.msg_id =\n>>> feed_all_y2012m07.ship_pos_messages.msg_id)\"\n>>> \" -> Append (cost=0.00..19057.93 rows=53 width=33) (actual\n>>> time=62.124..5486473.545 rows=387524 loops=1)\"\n>>\n>> Misestimated row counts... did you try running an analyze, or upping\n>> statistic targets?\n>\n> I have run analyse every so often. I think the problem is that as I get 16K\n> new rows every minutes, the \"stats\" are always out... Possible?\n\n\nLooking at this:\n\n\" -> Index Scan using\nidx_message_copies_wk2_date_src_pos_partial on message_copies_wk2\nmessage_copies (cost=0.00..19057.93 rows=52 width=32) (actual\ntime=62.124..5486270.845 rows=387524 loops=1)\"\n\" Index Cond: ((date_trunc('day'::text,\nmsg_date_rec) = '2012-07-17 00:00:00'::timestamp without time zone)\nAND (src_id = 1))\"\n\" Filter: ((date_part('day'::text,\nmsg_date_rec) = 17::double precision) AND (NOT (((((pos_georef1)::text\n|| (pos_georef2)::text) || (pos_georef3)::text) ||\n(pos_georef4)::text) IS NULL)) AND (((((pos_georef1)::text ||\n(pos_georef2)::text) || (pos_georef3)::text) || (pos_georef4)::text)\n<> ''::text))\"\n\nIt's very possible.\n\nI think pg 9.1 had a fix for that, but I'm not sure it will help in\nyour case, I'd have to know what that index looks like.\n",
"msg_date": "Fri, 20 Jul 2012 18:33:58 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On 20/07/2012 22:33, Claudio Freire wrote:\n> On Fri, Jul 20, 2012 at 6:27 PM, Ioannis Anagnostopoulos\n> <[email protected]> wrote:\n>> On 20/07/2012 22:23, Claudio Freire wrote:\n>>> On Fri, Jul 20, 2012 at 6:19 PM, Ioannis Anagnostopoulos\n>>> <[email protected]> wrote:\n>>>> \" -> Nested Loop (cost=0.00..20942.93 rows=53 width=144) (actual\n>>>> time=62.174..17783236.718 rows=387105 loops=1)\"\n>>>> \" Join Filter: (feed_all_y2012m07.message_copies.msg_id =\n>>>> feed_all_y2012m07.ship_pos_messages.msg_id)\"\n>>>> \" -> Append (cost=0.00..19057.93 rows=53 width=33) (actual\n>>>> time=62.124..5486473.545 rows=387524 loops=1)\"\n>>> Misestimated row counts... did you try running an analyze, or upping\n>>> statistic targets?\n>> I have run analyse every so often. I think the problem is that as I get 16K\n>> new rows every minutes, the \"stats\" are always out... Possible?\n>\n> Looking at this:\n>\n> \" -> Index Scan using\n> idx_message_copies_wk2_date_src_pos_partial on message_copies_wk2\n> message_copies (cost=0.00..19057.93 rows=52 width=32) (actual\n> time=62.124..5486270.845 rows=387524 loops=1)\"\n> \" Index Cond: ((date_trunc('day'::text,\n> msg_date_rec) = '2012-07-17 00:00:00'::timestamp without time zone)\n> AND (src_id = 1))\"\n> \" Filter: ((date_part('day'::text,\n> msg_date_rec) = 17::double precision) AND (NOT (((((pos_georef1)::text\n> || (pos_georef2)::text) || (pos_georef3)::text) ||\n> (pos_georef4)::text) IS NULL)) AND (((((pos_georef1)::text ||\n> (pos_georef2)::text) || (pos_georef3)::text) || (pos_georef4)::text)\n> <> ''::text))\"\n>\n> It's very possible.\n>\n> I think pg 9.1 had a fix for that, but I'm not sure it will help in\n> your case, I'd have to know what that index looks like.\nHere is the index:\n\nCREATE INDEX idx_message_copies_wk2_date_src_pos_partial\n ON feed_all_y2012m07.message_copies_wk2\n USING btree\n (date_trunc('day'::text, msg_date_rec), src_id, (((pos_georef1::text \n|| pos_georef2::text) || pos_georef3::text) || pos_georef4::text))\nTABLESPACE archive\n WHERE (((pos_georef1::text || pos_georef2::text) || \npos_georef3::text) || pos_georef4::text) IS NOT NULL OR NOT \n(((pos_georef1::text || pos_georef2::text) || pos_georef3::text) || \npos_georef4::text) = ''::text;\n\n\n\n\n\n\nOn 20/07/2012 22:33, Claudio Freire\n wrote:\n\n\nOn Fri, Jul 20, 2012 at 6:27 PM, Ioannis Anagnostopoulos\n<[email protected]> wrote:\n\n\nOn 20/07/2012 22:23, Claudio Freire wrote:\n\n\n\nOn Fri, Jul 20, 2012 at 6:19 PM, Ioannis Anagnostopoulos\n<[email protected]> wrote:\n\n\n\n\" -> Nested Loop (cost=0.00..20942.93 rows=53 width=144) (actual\ntime=62.174..17783236.718 rows=387105 loops=1)\"\n\" Join Filter: (feed_all_y2012m07.message_copies.msg_id =\nfeed_all_y2012m07.ship_pos_messages.msg_id)\"\n\" -> Append (cost=0.00..19057.93 rows=53 width=33) (actual\ntime=62.124..5486473.545 rows=387524 loops=1)\"\n\n\n\nMisestimated row counts... did you try running an analyze, or upping\nstatistic targets?\n\n\n\nI have run analyse every so often. I think the problem is that as I get 16K\nnew rows every minutes, the \"stats\" are always out... 
Possible?\n\n\n\n\nLooking at this:\n\n\" -> Index Scan using\nidx_message_copies_wk2_date_src_pos_partial on message_copies_wk2\nmessage_copies (cost=0.00..19057.93 rows=52 width=32) (actual\ntime=62.124..5486270.845 rows=387524 loops=1)\"\n\" Index Cond: ((date_trunc('day'::text,\nmsg_date_rec) = '2012-07-17 00:00:00'::timestamp without time zone)\nAND (src_id = 1))\"\n\" Filter: ((date_part('day'::text,\nmsg_date_rec) = 17::double precision) AND (NOT (((((pos_georef1)::text\n|| (pos_georef2)::text) || (pos_georef3)::text) ||\n(pos_georef4)::text) IS NULL)) AND (((((pos_georef1)::text ||\n(pos_georef2)::text) || (pos_georef3)::text) || (pos_georef4)::text)\n<> ''::text))\"\n\nIt's very possible.\n\nI think pg 9.1 had a fix for that, but I'm not sure it will help in\nyour case, I'd have to know what that index looks like.\n\n\n Here is the index:\n\nCREATE INDEX\n idx_message_copies_wk2_date_src_pos_partial\n ON feed_all_y2012m07.message_copies_wk2\n USING btree\n (date_trunc('day'::text, msg_date_rec), src_id,\n (((pos_georef1::text || pos_georef2::text) || pos_georef3::text)\n || pos_georef4::text))\n TABLESPACE archive\n WHERE (((pos_georef1::text || pos_georef2::text) ||\n pos_georef3::text) || pos_georef4::text) IS NOT NULL OR NOT\n (((pos_georef1::text || pos_georef2::text) || pos_georef3::text)\n || pos_georef4::text) = ''::text;",
"msg_date": "Fri, 20 Jul 2012 22:52:21 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On 20/07/2012 22:33, Rosser Schwarz wrote:\n> On Fri, Jul 20, 2012 at 2:27 PM, Ioannis Anagnostopoulos\n> <[email protected]> wrote:\n>> On 20/07/2012 22:23, Claudio Freire wrote:\n>>> Misestimated row counts... did you try running an analyze, or upping\n>>> statistic targets?\n>> I have run analyse every so often. I think the problem is that as I get 16K\n>> new rows every minutes, the \"stats\" are always out... Possible?\n> It may not help much with any skew in your data that results from\n> divergent data appearing, but you can update the statistics targets\n> for those columns and analyze again, and the planner should have much\n> better information about the distributions of their data. The max\n> stats target is 10000, but the default is 100. Increasing it even\n> just to 500 or 1000 should help the planner significantly.\n>\n> rls\n>\nI suppose that this is some kind of postgres.conf tweak?\n",
"msg_date": "Fri, 20 Jul 2012 22:53:46 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On 20/07/2012 22:53, Ioannis Anagnostopoulos wrote:\n> On 20/07/2012 22:33, Rosser Schwarz wrote:\n>> On Fri, Jul 20, 2012 at 2:27 PM, Ioannis Anagnostopoulos\n>> <[email protected]> wrote:\n>>> On 20/07/2012 22:23, Claudio Freire wrote:\n>>>> Misestimated row counts... did you try running an analyze, or upping\n>>>> statistic targets?\n>>> I have run analyse every so often. I think the problem is that as I \n>>> get 16K\n>>> new rows every minutes, the \"stats\" are always out... Possible?\n>> It may not help much with any skew in your data that results from\n>> divergent data appearing, but you can update the statistics targets\n>> for those columns and analyze again, and the planner should have much\n>> better information about the distributions of their data. The max\n>> stats target is 10000, but the default is 100. Increasing it even\n>> just to 500 or 1000 should help the planner significantly.\n>>\n>> rls\n>>\n> I suppose that this is some kind of postgres.conf tweak?\n>\nOn this Ubuntu installation the default_statistics_target = 1000 and not \n100. Do you think that this might be an issue?\n",
"msg_date": "Fri, 20 Jul 2012 23:19:15 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "Claudio Freire <[email protected]> writes:\n> Looking at this:\n\n> \" -> Index Scan using\n> idx_message_copies_wk2_date_src_pos_partial on message_copies_wk2\n> message_copies (cost=0.00..19057.93 rows=52 width=32) (actual\n> time=62.124..5486270.845 rows=387524 loops=1)\"\n> \" Index Cond: ((date_trunc('day'::text,\n> msg_date_rec) = '2012-07-17 00:00:00'::timestamp without time zone)\n> AND (src_id = 1))\"\n> \" Filter: ((date_part('day'::text,\n> msg_date_rec) = 17::double precision) AND (NOT (((((pos_georef1)::text\n> || (pos_georef2)::text) || (pos_georef3)::text) ||\n> (pos_georef4)::text) IS NULL)) AND (((((pos_georef1)::text ||\n> (pos_georef2)::text) || (pos_georef3)::text) || (pos_georef4)::text)\n> <> ''::text))\"\n\nI think the real problem is that the planner has no hope of doing\nanything very accurate with such an unwieldy filter condition. I'd look\nat ways of making the filter conditions simpler, perhaps by recasting\nthe data representation. In particular, that's a horridly bad way of\nasking whether some columns are empty, which I gather is the intent.\nIf you really want to do it just like that, creating an index on the\nconcatenation expression would guide ANALYZE to collect some stats about\nit, but it would probably be a lot more efficient to put together an AND\nor OR of tests on the individual columns.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Jul 2012 19:10:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On 21/07/2012 00:10, Tom Lane wrote:\n> Claudio Freire <[email protected]> writes:\n>> Looking at this:\n>> \" -> Index Scan using\n>> idx_message_copies_wk2_date_src_pos_partial on message_copies_wk2\n>> message_copies (cost=0.00..19057.93 rows=52 width=32) (actual\n>> time=62.124..5486270.845 rows=387524 loops=1)\"\n>> \" Index Cond: ((date_trunc('day'::text,\n>> msg_date_rec) = '2012-07-17 00:00:00'::timestamp without time zone)\n>> AND (src_id = 1))\"\n>> \" Filter: ((date_part('day'::text,\n>> msg_date_rec) = 17::double precision) AND (NOT (((((pos_georef1)::text\n>> || (pos_georef2)::text) || (pos_georef3)::text) ||\n>> (pos_georef4)::text) IS NULL)) AND (((((pos_georef1)::text ||\n>> (pos_georef2)::text) || (pos_georef3)::text) || (pos_georef4)::text)\n>> <> ''::text))\"\n> I think the real problem is that the planner has no hope of doing\n> anything very accurate with such an unwieldy filter condition. I'd look\n> at ways of making the filter conditions simpler, perhaps by recasting\n> the data representation. In particular, that's a horridly bad way of\n> asking whether some columns are empty, which I gather is the intent.\n> If you really want to do it just like that, creating an index on the\n> concatenation expression would guide ANALYZE to collect some stats about\n> it, but it would probably be a lot more efficient to put together an AND\n> or OR of tests on the individual columns.\n>\n> \t\t\tregards, tom lane\nSo what you suggest is to forget all together the concatenation of the \ngeoref1/2/3/4 and instead alter my query with something like:\n\ngeoref1 is not null and not georeg1 = ''....etc for georef2 3 and 4\n\nThat would require to alter my index and have the four georef columns \nseparately in it and not as a concatenation and so on for the partial \nindex part. And a final thing, you seem to imply that the indexes are \nused by the analyser to collect statistics even if they are not used. So \nan index serves not only as a way to speed up targeted queries but also \nto provide better statistics to the analyzer?\n\nKind Regards\nYiannis\n",
"msg_date": "Sat, 21 Jul 2012 00:56:34 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On 07/21/2012 06:19 AM, Ioannis Anagnostopoulos wrote:\n\n> On this Ubuntu installation the default_statistics_target = 1000 and \n> not 100. Do you think that this might be an issue?\n\nNope. You should generally avoid setting default_statistics_target too \nhigh anyway; leave it where it is and use ALTER TABLE ... ALTER COLUMN \n... SET STATISTICS to raise the targets on columns where you're seeing \nbad statistics estimates.\n\nhttp://www.postgresql.org/docs/9.1/static/sql-altertable.html\n\nAlso make sure autovaccum is running frequently so it keeps the stats up \nto date.\n\n--\nCraig Ringer\n\n\n\n\n\n\n\n\nOn 07/21/2012 06:19 AM, Ioannis\n Anagnostopoulos wrote:\n\n\nOn\n this Ubuntu installation the default_statistics_target = 1000 and\n not 100. Do you think that this might be an issue?\n \n\n\n Nope. You should generally avoid setting default_statistics_target\n too high anyway; leave it where it is and use ALTER TABLE ... ALTER\n COLUMN ... SET STATISTICS to raise the targets on columns where\n you're seeing bad statistics estimates.\n\n\nhttp://www.postgresql.org/docs/9.1/static/sql-altertable.html\n\n Also make sure autovaccum is running frequently so it keeps the\n stats up to date.\n\n --\n Craig Ringer",
"msg_date": "Sat, 21 Jul 2012 16:02:21 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "Hello,\nisn't the first test superfluous here ?\n\n>\twhere extract('day' from message_copies.msg_date_rec) = 17\n>\tand date_trunc('day', message_copies.msg_date_rec) = '2012-07-17'\n\n\n> Here is the index:\n> \n> CREATE INDEX idx_message_copies_wk2_date_src_pos_partial\n> ON feed_all_y2012m07.message_copies_wk2\n> USING btree\n> (date_trunc('day'::text, msg_date_rec),\n> src_id,\n> (((pos_georef1::text || pos_georef2::text) || pos_georef3::text) || pos_georef4::text))\n> TABLESPACE archive\n> WHERE (((pos_georef1::text || pos_georef2::text) || pos_georef3::text) || pos_georef4::text) IS NOT NULL \n> OR NOT (((pos_georef1::text || pos_georef2::text) || pos_georef3::text) || pos_georef4::text) = ''::text;\n\n\nthe georef test can be simplified using coalesce:\n\n> and (message_copies.pos_georef1 || message_copies.pos_georef2 || message_copies.pos_georef3 || message_copies.pos_georef4) <> ''\n> and not (message_copies.pos_georef1 || message_copies.pos_georef2 || message_copies.pos_georef3 || message_copies.pos_georef4) is null\n =>\n and coaesce ( \n (message_copies.pos_georef1 || message_copies.pos_georef2 || message_copies.pos_georef3 || message_copies.pos_georef4), \n '') <> ''\n \nIn order to avoid this test at query time you might add a boolean column message_copies.pos.has_georef,\nand keep it up to date with a before insert or update trigger. This will allow to shorten your index definition and simplify the planner task a little bit.\nMoreover it will fasten your query in cases when the index don't get used.\n\nAs Tom already mentioned it, it may make sense not to concatenate the georef within the index, but keep them separated, or even keep them in different indexes.\nWhich is the best depend on the other queries running against this table\n \nHTH,\n\nMarc Mamin\n \n\n\n-----Original Message-----\nFrom: [email protected] on behalf of Ioannis Anagnostopoulos\nSent: Sat 7/21/2012 1:56 AM\nTo: Tom Lane\nCc: Claudio Freire; [email protected]\nSubject: Re: [PERFORM] A very long running query....\n \nOn 21/07/2012 00:10, Tom Lane wrote:\n> Claudio Freire <[email protected]> writes:\n>> Looking at this:\n>> \" -> Index Scan using\n>> idx_message_copies_wk2_date_src_pos_partial on message_copies_wk2\n>> message_copies (cost=0.00..19057.93 rows=52 width=32) (actual\n>> time=62.124..5486270.845 rows=387524 loops=1)\"\n>> \" Index Cond: ((date_trunc('day'::text,\n>> msg_date_rec) = '2012-07-17 00:00:00'::timestamp without time zone)\n>> AND (src_id = 1))\"\n>> \" Filter: ((date_part('day'::text,\n>> msg_date_rec) = 17::double precision) AND (NOT (((((pos_georef1)::text\n>> || (pos_georef2)::text) || (pos_georef3)::text) ||\n>> (pos_georef4)::text) IS NULL)) AND (((((pos_georef1)::text ||\n>> (pos_georef2)::text) || (pos_georef3)::text) || (pos_georef4)::text)\n>> <> ''::text))\"\n> I think the real problem is that the planner has no hope of doing\n> anything very accurate with such an unwieldy filter condition. I'd look\n> at ways of making the filter conditions simpler, perhaps by recasting\n> the data representation. 
In particular, that's a horridly bad way of\n> asking whether some columns are empty, which I gather is the intent.\n> If you really want to do it just like that, creating an index on the\n> concatenation expression would guide ANALYZE to collect some stats about\n> it, but it would probably be a lot more efficient to put together an AND\n> or OR of tests on the individual columns.\n>\n> \t\t\tregards, tom lane\nSo what you suggest is to forget all together the concatenation of the \ngeoref1/2/3/4 and instead alter my query with something like:\n\ngeoref1 is not null and not georeg1 = ''....etc for georef2 3 and 4\n\nThat would require to alter my index and have the four georef columns \nseparately in it and not as a concatenation and so on for the partial \nindex part. And a final thing, you seem to imply that the indexes are \nused by the analyser to collect statistics even if they are not used. So \nan index serves not only as a way to speed up targeted queries but also \nto provide better statistics to the analyzer?\n\nKind Regards\nYiannis\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance
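\n\nA rough, untested sketch of the has_georef idea above (the column, function and trigger names are only placeholders, and the wk2 partition is used as an example - each partition that receives inserts would presumably need the same trigger):\n\nALTER TABLE feed_all_y2012m07.message_copies_wk2\n    ADD COLUMN pos_has_georef boolean;\n\nCREATE OR REPLACE FUNCTION feed_all_y2012m07.set_pos_has_georef()\nRETURNS trigger AS $$\nBEGIN\n    -- same test as the coalesce expression above: true only when no georef\n    -- part is NULL and the concatenation is not empty\n    NEW.pos_has_georef := coalesce(NEW.pos_georef1 || NEW.pos_georef2 ||\n                                   NEW.pos_georef3 || NEW.pos_georef4, '') <> '';\n    RETURN NEW;\nEND;\n$$ LANGUAGE plpgsql;\n\nCREATE TRIGGER trg_message_copies_wk2_georef\n    BEFORE INSERT OR UPDATE ON feed_all_y2012m07.message_copies_wk2\n    FOR EACH ROW EXECUTE PROCEDURE feed_all_y2012m07.set_pos_has_georef();\n",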
"msg_date": "Sat, 21 Jul 2012 11:16:16 +0200",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On 21/07/2012 10:16, Marc Mamin wrote:\n> RE: [PERFORM] A very long running query....\n>\n> Hello,\n> isn't the first test superfluous here ?\n>\n> > where extract('day' from message_copies.msg_date_rec) = 17\n> > and date_trunc('day', message_copies.msg_date_rec) = '2012-07-17'\n>\n>\n> > Here is the index:\n> >\n> > CREATE INDEX idx_message_copies_wk2_date_src_pos_partial\n> > ON feed_all_y2012m07.message_copies_wk2\n> > USING btree\n> > (date_trunc('day'::text, msg_date_rec),\n> > src_id,\n> > (((pos_georef1::text || pos_georef2::text) || pos_georef3::text) \n> || pos_georef4::text))\n> > TABLESPACE archive\n> > WHERE (((pos_georef1::text || pos_georef2::text) || \n> pos_georef3::text) || pos_georef4::text) IS NOT NULL\n> > OR NOT (((pos_georef1::text || pos_georef2::text) || \n> pos_georef3::text) || pos_georef4::text) = ''::text;\n>\n>\n> the georef test can be simplified using coalesce:\n>\n> > and (message_copies.pos_georef1 || message_copies.pos_georef2 \n> || message_copies.pos_georef3 || message_copies.pos_georef4) <> ''\n> > and not (message_copies.pos_georef1 || message_copies.pos_georef2 \n> || message_copies.pos_georef3 || message_copies.pos_georef4) is null\n> =>\n> and coaesce (\n> (message_copies.pos_georef1 || message_copies.pos_georef2 || \n> message_copies.pos_georef3 || message_copies.pos_georef4),\n> '') <> ''\n>\n> In order to avoid this test at query time you might add a boolean \n> column message_copies.pos.has_georef,\n> and keep it up to date with a before insert or update trigger. This \n> will allow to shorten your index definition and simplify the planner \n> task a little bit.\n> Moreover it will fasten your query in cases when the index don't get used.\n>\n> As Tom already mentioned it, it may make sense not to concatenate the \n> georef within the index, but keep them separated, or even keep them in \n> different indexes.\n> Which is the best depend on the other queries running against this table\n>\n> HTH,\n>\n> Marc Mamin\n>\n>\n>\n> -----Original Message-----\n> From: [email protected] on behalf of Ioannis \n> Anagnostopoulos\n> Sent: Sat 7/21/2012 1:56 AM\n> To: Tom Lane\n> Cc: Claudio Freire; [email protected]\n> Subject: Re: [PERFORM] A very long running query....\n>\n> On 21/07/2012 00:10, Tom Lane wrote:\n> > Claudio Freire <[email protected]> writes:\n> >> Looking at this:\n> >> \" -> Index Scan using\n> >> idx_message_copies_wk2_date_src_pos_partial on message_copies_wk2\n> >> message_copies (cost=0.00..19057.93 rows=52 width=32) (actual\n> >> time=62.124..5486270.845 rows=387524 loops=1)\"\n> >> \" Index Cond: ((date_trunc('day'::text,\n> >> msg_date_rec) = '2012-07-17 00:00:00'::timestamp without time zone)\n> >> AND (src_id = 1))\"\n> >> \" Filter: ((date_part('day'::text,\n> >> msg_date_rec) = 17::double precision) AND (NOT (((((pos_georef1)::text\n> >> || (pos_georef2)::text) || (pos_georef3)::text) ||\n> >> (pos_georef4)::text) IS NULL)) AND (((((pos_georef1)::text ||\n> >> (pos_georef2)::text) || (pos_georef3)::text) || (pos_georef4)::text)\n> >> <> ''::text))\"\n> > I think the real problem is that the planner has no hope of doing\n> > anything very accurate with such an unwieldy filter condition. I'd look\n> > at ways of making the filter conditions simpler, perhaps by recasting\n> > the data representation. 
In particular, that's a horridly bad way of\n> > asking whether some columns are empty, which I gather is the intent.\n> > If you really want to do it just like that, creating an index on the\n> > concatenation expression would guide ANALYZE to collect some stats about\n> > it, but it would probably be a lot more efficient to put together an AND\n> > or OR of tests on the individual columns.\n> >\n> >             regards, tom lane\n> So what you suggest is to forget all together the concatenation of the\n> georef1/2/3/4 and instead alter my query with something like:\n>\n> georef1 is not null and not georeg1 = ''....etc for georef2 3 and 4\n>\n> That would require to alter my index and have the four georef columns\n> separately in it and not as a concatenation and so on for the partial\n> index part. And a final thing, you seem to imply that the indexes are\n> used by the analyser to collect statistics even if they are not used. So\n> an index serves not only as a way to speed up targeted queries but also\n> to provide better statistics to the analyzer?\n>\n> Kind Regards\n> Yiannis\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\nNo because it is used to select a partition. Otherwise it will go \nthrough the whole hierarchy...",
"msg_date": "Sat, 21 Jul 2012 10:22:09 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "[ Please try to trim quotes when replying. People don't want to re-read\n the entire thread in every message. ]\n\nIoannis Anagnostopoulos <[email protected]> writes:\n> On 21/07/2012 10:16, Marc Mamin wrote:\n>> isn't the first test superfluous here ?\n>> \n>>> where extract('day' from message_copies.msg_date_rec) = 17\n>>> and date_trunc('day', message_copies.msg_date_rec) = '2012-07-17'\n\n> No because it is used to select a partition. Otherwise it will go \n> through the whole hierarchy...\n\nYou're using extract(day...) to define partitions? You might want to\nrethink that. The planner has got absolutely no intelligence about\nthe behavior of extract, and in particular doesn't realize that the\ndate_trunc condition implies the extract condition; so that's another\npart of the cause of the estimation error here.\n\nWhat's usually recommended for partitioning is simple equality or\nrange constraints, such as \"msg_date_rec >= 'date1' AND\nmsg_date_rec < 'date2'\", which the planner does have a fair amount\nof intelligence about.\n\nNow, you can generalize that to equality or range constraints using\nan expression; for instance there'd be no problem to partition on\ndate_trunc('day', msg_date_rec) rather than msg_date_rec directly,\nso long as your queries always use that same expression. But you\nshould not expect that the planner can deduce very much about the\ncorrelations between results of different functions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Jul 2012 12:58:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On 21/07/2012 17:58, Tom Lane wrote:\n> [ Please try to trim quotes when replying. People don't want to re-read\n> the entire thread in every message. ]\n>\n> Ioannis Anagnostopoulos <[email protected]> writes:\n>> On 21/07/2012 10:16, Marc Mamin wrote:\n>>> isn't the first test superfluous here ?\n>>>\n>>>> where extract('day' from message_copies.msg_date_rec) = 17\n>>>> and date_trunc('day', message_copies.msg_date_rec) = '2012-07-17'\n>> No because it is used to select a partition. Otherwise it will go\n>> through the whole hierarchy...\n> You're using extract(day...) to define partitions? You might want to\n> rethink that. The planner has got absolutely no intelligence about\n> the behavior of extract, and in particular doesn't realize that the\n> date_trunc condition implies the extract condition; so that's another\n> part of the cause of the estimation error here.\n>\n> What's usually recommended for partitioning is simple equality or\n> range constraints, such as \"msg_date_rec >= 'date1' AND\n> msg_date_rec < 'date2'\", which the planner does have a fair amount\n> of intelligence about.\n>\n> Now, you can generalize that to equality or range constraints using\n> an expression; for instance there'd be no problem to partition on\n> date_trunc('day', msg_date_rec) rather than msg_date_rec directly,\n> so long as your queries always use that same expression. But you\n> should not expect that the planner can deduce very much about the\n> correlations between results of different functions.\n>\n> \t\t\tregards, tom lane\nI think you got this wrong here. If you see the query again you will see \nthat I do use equality. The problem is that my \"equality\" occurs\nby extracting the date from the msg_date_rec column. To put it in other \nwords, for not using the \"extract\" I should have an additional\ncolumn only with the \"date\" number to perform the equality. Don't you \nfeel that this is not right since I have the actual date? The constrain\nwithin the table that defines the partition is as follows:\n\nCONSTRAINT message_copies_wk0_date CHECK (date_part('day'::text, \nmsg_date_rec) >= 1::double precision AND date_part('day'::text, \nmsg_date_rec) <= 7::double precision)\n\nI see not problem at this. The planner gets it right and \"hits\" the \ncorrect table every time. So unless if there is a technique here that I \ncompletely miss,\nwhere is the problem?\n\n\nRegards\nYiannis\n\n\n\n\n\n\nOn 21/07/2012 17:58, Tom Lane wrote:\n\n\n[ Please try to trim quotes when replying. People don't want to re-read\n the entire thread in every message. ]\n\nIoannis Anagnostopoulos <[email protected]> writes:\n\n\nOn 21/07/2012 10:16, Marc Mamin wrote:\n\n\nisn't the first test superfluous here ?\n\n\n\nwhere extract('day' from message_copies.msg_date_rec) = 17\nand date_trunc('day', message_copies.msg_date_rec) = '2012-07-17'\n\n\n\n\n\n\n\nNo because it is used to select a partition. Otherwise it will go \nthrough the whole hierarchy...\n\n\n\nYou're using extract(day...) to define partitions? You might want to\nrethink that. 
The planner has got absolutely no intelligence about\nthe behavior of extract, and in particular doesn't realize that the\ndate_trunc condition implies the extract condition; so that's another\npart of the cause of the estimation error here.\n\nWhat's usually recommended for partitioning is simple equality or\nrange constraints, such as \"msg_date_rec >= 'date1' AND\nmsg_date_rec < 'date2'\", which the planner does have a fair amount\nof intelligence about.\n\nNow, you can generalize that to equality or range constraints using\nan expression; for instance there'd be no problem to partition on\ndate_trunc('day', msg_date_rec) rather than msg_date_rec directly,\nso long as your queries always use that same expression. But you\nshould not expect that the planner can deduce very much about the\ncorrelations between results of different functions.\n\n\t\t\tregards, tom lane\n\n\n I think you got this wrong here. If you see the query again you will\n see that I do use equality. The problem is that my \"equality\" occurs\n by extracting the date from the msg_date_rec column. To put it in\n other words, for not using the \"extract\" I should have an additional\n column only with the \"date\" number to perform the equality. Don't\n you feel that this is not right since I have the actual date? The\n constrain\n within the table that defines the partition is as follows:\n\nCONSTRAINT message_copies_wk0_date CHECK\n (date_part('day'::text, msg_date_rec) >= 1::double precision\n AND date_part('day'::text, msg_date_rec) <= 7::double\n precision)\n\n I see not problem at this. The planner gets it right and \"hits\" the\n correct table every time. So unless if there is a technique here\n that I completely miss, \n where is the problem?\n\n\n Regards\n Yiannis",
"msg_date": "Sat, 21 Jul 2012 18:42:31 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On 21/07/2012 00:10, Tom Lane wrote:\n> Claudio Freire <[email protected]> writes:\n>> Looking at this:\n>> \" -> Index Scan using\n>> idx_message_copies_wk2_date_src_pos_partial on message_copies_wk2\n>> message_copies (cost=0.00..19057.93 rows=52 width=32) (actual\n>> time=62.124..5486270.845 rows=387524 loops=1)\"\n>> \" Index Cond: ((date_trunc('day'::text,\n>> msg_date_rec) = '2012-07-17 00:00:00'::timestamp without time zone)\n>> AND (src_id = 1))\"\n>> \" Filter: ((date_part('day'::text,\n>> msg_date_rec) = 17::double precision) AND (NOT (((((pos_georef1)::text\n>> || (pos_georef2)::text) || (pos_georef3)::text) ||\n>> (pos_georef4)::text) IS NULL)) AND (((((pos_georef1)::text ||\n>> (pos_georef2)::text) || (pos_georef3)::text) || (pos_georef4)::text)\n>> <> ''::text))\"\n> I think the real problem is that the planner has no hope of doing\n> anything very accurate with such an unwieldy filter condition. I'd look\n> at ways of making the filter conditions simpler, perhaps by recasting\n> the data representation. In particular, that's a horridly bad way of\n> asking whether some columns are empty, which I gather is the intent.\n> If you really want to do it just like that, creating an index on the\n> concatenation expression would guide ANALYZE to collect some stats about\n> it, but it would probably be a lot more efficient to put together an AND\n> or OR of tests on the individual columns.\n>\n> \t\t\tregards, tom lane\nOK regarding the index I use... I follow your second advice about \nefficiency with individual columns and changed it to:\n\nCREATE INDEX idx_message_copies_wk2_date_src_pos_partial\n ON feed_all_y2012m07.message_copies_wk2\n USING btree\n (date_trunc('day'::text, msg_date_rec), src_id, pos_georef1, \npos_georef2, pos_georef3, pos_georef4)\nTABLESPACE \"index\"\n WHERE\n pos_georef1 IS NOT NULL\n AND NOT pos_georef1::text = ''::text\n AND pos_georef2 IS NOT NULL\n AND NOT pos_georef2::text = ''::text\n AND pos_georef3 IS NOT NULL\n AND NOT pos_georef3::text = ''::text\n AND pos_georef4 IS NOT NULL\n AND NOT pos_georef4::text = ''::text;\n\nThe query has been changed as well as follows now:\n\nSELECT\n src_id,\n date_trunc('day', message_copies.msg_date_rec) as date_count,\n message_copies.pos_georef1,\n message_copies.pos_georef2,\n message_copies.pos_georef3,\n message_copies.pos_georef4,\n ais_server.array_accum(CASE WHEN msg_type BETWEEN 1 and 3 \nTHEN message_copies.msg_id END) as msgA_array,\n ais_server.array_accum(CASE WHEN msg_type = 18 THEN \nmessage_copies.msg_id END) as msgB_std_array,\n ais_server.array_accum(CASE WHEN msg_type = 19 THEN \nmessage_copies.msg_id END) as msgB_ext_array,\n uniq\n (\n ais_server.array_accum(CASE WHEN obj_type = 'SHIP_TYPE_A' \nTHEN obj_mmsi END)\n ) as mmsi_type_A_array,\n uniq\n (\n ais_server.array_accum(CASE WHEN obj_type = 'SHIP_TYPE_B' \nTHEN obj_mmsi END)\n ) as mmsi_type_B_array,\n avg(ship_speed) / 10.0 as avg_speed,\n avg(ship_heading) as avg_heading,\n avg(ship_course) / 10.0 as avg_course,\n ST_Multi(ST_Collect(ship_pos_messages.pos_point)) as geom\n from\n feed_all_y2012m07.message_copies join\n (feed_all_y2012m07.ship_pos_messages join \nais_server.ship_objects on (ship_pos_messages.obj_id = \nship_objects.obj_id))\n on (message_copies.msg_id = ship_pos_messages.msg_id)\n where\n extract('day' from message_copies.msg_date_rec) = 17\n and date_trunc('day', message_copies.msg_date_rec) = '2012-07-17'\n and message_copies.src_id = 5\n and not message_copies.pos_georef1 = '' and not 
\nmessage_copies.pos_georef2 = '' and not message_copies.pos_georef3 = '' \nand not message_copies.pos_georef4 = ''\n        and message_copies.pos_georef1 is not null and \nmessage_copies.pos_georef2 is not null and message_copies.pos_georef3 is \nnot null and message_copies.pos_georef4 is not null\n        and extract('day' from ship_pos_messages.msg_date_rec) = 17\n    group by src_id, date_count, message_copies.pos_georef1, \nmessage_copies.pos_georef2, message_copies.pos_georef3, \nmessage_copies.pos_georef4;\n\nI am not sure that I can see an improvement; at least for src_id values that \nhave lots of msg_id per day, the query never returned even 5 hours later \nrunning \"explain analyze\". For smaller src_id\n(message wise) there might be some improvement, or it was just the \nanalyse that I ran. As I said, the stats quickly become stale \nbecause of the big number of updates. So it looks like\nit is not the \"funny\" \"where\" concatenation or some kind of index \nconstruction problem. Which brings us back to the issue of the \nper-column \"statistics_target\". My problem is that, given the\nquery plan I provided you yesterday, I am not sure which columns' \nstatistics_target to touch and what sort of number to introduce. Is \nthere any rule of thumb?\n\nKind regards\nYiannis\n",
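On the per-column statistics_target question at the end of that message, the usual mechanism looks like the sketch below. The column choice and the value 1000 are only illustrative assumptions; nothing in this thread prescribes them:

    -- Raise the statistics target for a column used in the selective filter,
    -- then re-analyze so the new target takes effect (the default is 100).
    ALTER TABLE feed_all_y2012m07.message_copies_wk2
        ALTER COLUMN src_id SET STATISTICS 1000;
    ANALYZE feed_all_y2012m07.message_copies_wk2;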
"msg_date": "Sat, 21 Jul 2012 20:16:36 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On Sat, Jul 21, 2012 at 4:16 PM, Ioannis Anagnostopoulos\n<[email protected]> wrote:\n> I am not sure that I can see an improvement, at least on src_id that have\n> lots of msg_id per day the query never returned even 5 hours later running\n> \"exaplain analyze\". For smaller src_id\n> (message wise) there might be some improvement or it was just the analyse\n> that I run. As I said the stats goes quickly out of scope because of the big\n> number of updates. So it looks like that\n> it is not the \"funny\" \"where\" concatenation or some kind of index\n> construction problem. Which brings us back to the issue of the\n> \"statistics_target\" on per column. My problem is that given the\n> query plan I provided you yesterday, I am not sure which columns\n> statistics_target to touch and what short of number to introduce. Is there\n> any rule of thumb?\n\nWhat's the size of your index, tables, and such?\nIn GB I mean, not tuples.\n",
"msg_date": "Sat, 21 Jul 2012 16:19:20 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On 21/07/2012 20:19, Claudio Freire wrote:\n> On Sat, Jul 21, 2012 at 4:16 PM, Ioannis Anagnostopoulos\n> <[email protected]> wrote:\n>> I am not sure that I can see an improvement, at least on src_id that have\n>> lots of msg_id per day the query never returned even 5 hours later running\n>> \"exaplain analyze\". For smaller src_id\n>> (message wise) there might be some improvement or it was just the analyse\n>> that I run. As I said the stats goes quickly out of scope because of the big\n>> number of updates. So it looks like that\n>> it is not the \"funny\" \"where\" concatenation or some kind of index\n>> construction problem. Which brings us back to the issue of the\n>> \"statistics_target\" on per column. My problem is that given the\n>> query plan I provided you yesterday, I am not sure which columns\n>> statistics_target to touch and what short of number to introduce. Is there\n>> any rule of thumb?\n> What's the size of your index, tables, and such?\n> In GB I mean, not tuples.\nThe message_copies_wk2 that I currently hit is 13GB and 11 the Indexes, the\nship_a_pos_messages_wk2 is 17GB and 2.5MB the index and the ship_objects\nis 150MB table and index approx.\n\nYiannis\n",
"msg_date": "Sat, 21 Jul 2012 20:24:24 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On Fri, Jul 20, 2012 at 6:19 PM, Ioannis Anagnostopoulos\n<[email protected]> wrote:\n> (feed_all_y2012m07.ship_pos_messages join\n> ais_server.ship_objects on (ship_pos_messages.obj_id = ship_objects.obj_id))\n> on (message_copies.msg_id = ship_pos_messages.msg_id)\n\nIt's this part of the query that's taking 3.2 hours.\n\nMove the filtered message_copies to a CTE, and the filtered\nship_pos_messages join to another CTE. That should (in my experience)\nget you better performance.\n",
"msg_date": "Sat, 21 Jul 2012 17:10:51 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On Sat, Jul 21, 2012 at 5:10 PM, Claudio Freire <[email protected]> wrote:\n> <[email protected]> wrote:\n>> (feed_all_y2012m07.ship_pos_messages join\n>> ais_server.ship_objects on (ship_pos_messages.obj_id = ship_objects.obj_id))\n>> on (message_copies.msg_id = ship_pos_messages.msg_id)\n>\n> It's this part of the query that's taking 3.2 hours.\n>\n> Move the filtered message_copies to a CTE, and the filtered\n> ship_pos_messages join to another CTE. That should (in my experience)\n> get you better performance.\n\nBtw... did you try the hash thing?\n",
"msg_date": "Sat, 21 Jul 2012 17:11:27 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "On 21/07/2012 21:11, Claudio Freire wrote:\n> On Sat, Jul 21, 2012 at 5:10 PM, Claudio Freire <[email protected]> wrote:\n>> <[email protected]> wrote:\n>>> (feed_all_y2012m07.ship_pos_messages join\n>>> ais_server.ship_objects on (ship_pos_messages.obj_id = ship_objects.obj_id))\n>>> on (message_copies.msg_id = ship_pos_messages.msg_id)\n>> It's this part of the query that's taking 3.2 hours.\n>>\n>> Move the filtered message_copies to a CTE, and the filtered\n>> ship_pos_messages join to another CTE. That should (in my experience)\n>> get you better performance.\n> Btw... did you try the hash thing?\nNot yet as I am trying at present to simplify the index getting the \ngeorefs out of it. Don't know if this is a good idea but I though that \nsince I am not testing (yet) any equality other than making sure that \nthe georefs are not null or empty, I could avoid having it in the index, \nthus reducing its size a lot... At least for now.....\n",
"msg_date": "Sat, 21 Jul 2012 21:29:28 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A very long running query...."
},
{
"msg_contents": "Ioannis Anagnostopoulos <[email protected]> wrote:\n \n> I have stripped completely the database from additional indexes,\n> those that possible delay the insertion process, of course\n> maintaining the pkey and 2 or three absolutely mandatory indexes\n> for my select queries. As a result I have a sleek and steady\n> performance of around 0.70 msec per insertion.\n \nNot bad!\n \n> However I have now closed a full circle as I have a fast database\n> but when I try to \"select\", making optimum usage of the left over\n> indexes, the insertion process slows down. Yes my selections are\n> huge (they are not slow, just huge as it is about geographical\n> points etc) but I am asking if there is anyway that I can\n> \"prioritise\" the insertions over the \"selections\". These\n> \"selections\" are happening anyway as batch process during night so\n> I don't really mind if they will take 2 or 5 hours, as long as\n> they are ready at 9.00am next day.\n \nYou could try adding back indexes on the most critical columns, one\nat a time. You might want to try single-column indexes, rather than\nthe wide ones you had before. The narrower keys may cut the cost of\nmaintaining the indexes enough to tolerate a few, and PostgreSQL can\noften combine multiple indexes using \"bitmap index scans\".\n \nYou could also play with \"nice\" and \"ionice\" to reduce priority of\nthe \"select\" processes, but watch any such attempt very carefully\nuntil you see what the impact really is.\n \nSince you seem to be relatively satisfied with where you are now,\nyou should make small changes and be prepared to revert them if\ninsert performance drops off too much.\n \n-Kevin\n",
"msg_date": "Mon, 30 Jul 2012 17:43:13 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index slow down insertions..."
}
] |
[
{
"msg_contents": "I have a single *table* that is some 560GB in size, 6 columns, average\nrow width 63.\nThere are approximately 6.1 billion rows.\nIt has 4 indices, 131, 313, 131 and 190 GB in size, respectively. All\nare btree indices.\n\nI tried inserting new data into the table, and it's taking a *very* long time.\nI pre-built the data to be inserted into a temporary table with the\nexact same structure and column ordering, etc, and the temporary table\nis about 8.5GB in size with about 93 million rows.\nThe temporary table was built in about 95 seconds.\nThe insert has been going for 47 hours and 21 minutes, give or take.\nI'm not doing any correlation or filtering, etc -- straight up\ninsert, literally \"insert into big_table select * from\nthe_temp_table;\".\n\nvmstat output doesn't seem that useful, with disk wait being 10-15%\nand I/O speeds highly variable, from 5-20MB/s reads couple with\n0-16MB/s writes, generally on the lower end of these.\nstrace of the inserting process shows that it's basically hammering\nthe disk in terms of random reads and infrequent writes.\npostgresql. It's not verifying, rebuilding, etc. While this process is\nactive, streaming write I/O is terrible - 36MB/s. WIth it \"paused\"\n(via strace) I get 72MB/s. (reads are 350MB/s).\n\nThe OS is Scientific Linux 6.2, and the version of postgresql is 9.1.4\n- x86_64. There is nothing else of note happening on the box. The box\nis a quad CPU, dual-core each Xeon E5430 @ 2.66GHz with 32GB of RAM\nand a 3ware 9690 RAID 4TB RAID10 for the storage for\n\nWhat might be going on here?\n\n\n-- \nJon\n",
"msg_date": "Mon, 16 Jul 2012 08:37:36 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "very very slow inserts into very large table"
},
{
"msg_contents": "On 16/07/12 14:37, Jon Nelson wrote:\n> I have a single *table* that is some 560GB in size, 6 columns, average\n> row width 63.\n> There are approximately 6.1 billion rows.\n> It has 4 indices, 131, 313, 131 and 190 GB in size, respectively. All\n> are btree indices.\n>\n> I tried inserting new data into the table, and it's taking a *very* long time.\n> I pre-built the data to be inserted into a temporary table with the\n> exact same structure and column ordering, etc, and the temporary table\n> is about 8.5GB in size with about 93 million rows.\n> The temporary table was built in about 95 seconds.\n> The insert has been going for 47 hours and 21 minutes, give or take.\n> I'm not doing any correlation or filtering, etc -- straight up\n> insert, literally \"insert into big_table select * from\n> the_temp_table;\".\n>\n> vmstat output doesn't seem that useful, with disk wait being 10-15%\n> and I/O speeds highly variable, from 5-20MB/s reads couple with\n> 0-16MB/s writes, generally on the lower end of these.\n> strace of the inserting process shows that it's basically hammering\n> the disk in terms of random reads and infrequent writes.\n> postgresql. It's not verifying, rebuilding, etc. While this process is\n> active, streaming write I/O is terrible - 36MB/s. WIth it \"paused\"\n> (via strace) I get 72MB/s. (reads are 350MB/s).\n>\n> The OS is Scientific Linux 6.2, and the version of postgresql is 9.1.4\n> - x86_64. There is nothing else of note happening on the box. The box\n> is a quad CPU, dual-core each Xeon E5430 @ 2.66GHz with 32GB of RAM\n> and a 3ware 9690 RAID 4TB RAID10 for the storage for\n>\n> What might be going on here?\n>\n>\nEvery insert updates four indexes, so at least 3 of those will be in \nrandom order. The indexes don't fit in memory, so all those updates will \ninvolve reading most of the relevant b-tree pages from disk (or at least \nthe leaf level). A total of 10ms of random read from disk (per inserted \nrow) wouldn't surprise me ... which adds up to more than 10 days for \nyour 93 million rows.\n\nMark Thornton\n",
"msg_date": "Mon, 16 Jul 2012 15:06:09 +0100",
"msg_from": "Mark Thornton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very very slow inserts into very large table"
},
{
"msg_contents": "On Mon, Jul 16, 2012 at 7:06 AM, Mark Thornton <[email protected]> wrote:\n\n>\n>> Every insert updates four indexes, so at least 3 of those will be in\n> random order. The indexes don't fit in memory, so all those updates will\n> involve reading most of the relevant b-tree pages from disk (or at least\n> the leaf level). A total of 10ms of random read from disk (per inserted\n> row) wouldn't surprise me ... which adds up to more than 10 days for your\n> 93 million rows.\n\n\nWhich is the long way of saying that you will likely benefit from\npartitioning that table into a number of smaller tables, especially if\nqueries on that table tend to access only a subset of the data that can be\ndefined to always fit into a smaller number of partitions than the total.\n At the very least, inserts will be faster because individual indexes will\nbe smaller. But unless all queries can't be constrained to fit within a\nsubset of partitions, you'll also see improved performance on selects.\n\n--sam\n\nOn Mon, Jul 16, 2012 at 7:06 AM, Mark Thornton <[email protected]> wrote:\n\n\nEvery insert updates four indexes, so at least 3 of those will be in random order. The indexes don't fit in memory, so all those updates will involve reading most of the relevant b-tree pages from disk (or at least the leaf level). A total of 10ms of random read from disk (per inserted row) wouldn't surprise me ... which adds up to more than 10 days for your 93 million rows.\nWhich is the long way of saying that you will likely benefit from partitioning that table into a number of smaller tables, especially if queries on that table tend to access only a subset of the data that can be defined to always fit into a smaller number of partitions than the total. At the very least, inserts will be faster because individual indexes will be smaller. But unless all queries can't be constrained to fit within a subset of partitions, you'll also see improved performance on selects.\n--sam",
"msg_date": "Mon, 16 Jul 2012 10:35:32 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very very slow inserts into very large table"
},
{
"msg_contents": "On Mon, Jul 16, 2012 at 12:35 PM, Samuel Gendler\n<[email protected]> wrote:\n> On Mon, Jul 16, 2012 at 7:06 AM, Mark Thornton <[email protected]> wrote:\n>>>\n>>>\n>> Every insert updates four indexes, so at least 3 of those will be in\n>> random order. The indexes don't fit in memory, so all those updates will\n>> involve reading most of the relevant b-tree pages from disk (or at least the\n>> leaf level). A total of 10ms of random read from disk (per inserted row)\n>> wouldn't surprise me ... which adds up to more than 10 days for your 93\n>> million rows.\n>\n>\n> Which is the long way of saying that you will likely benefit from\n> partitioning that table into a number of smaller tables, especially if\n> queries on that table tend to access only a subset of the data that can be\n> defined to always fit into a smaller number of partitions than the total.\n> At the very least, inserts will be faster because individual indexes will be\n> smaller. But unless all queries can't be constrained to fit within a subset\n> of partitions, you'll also see improved performance on selects.\n\nAcknowledged. My data is actually partitioned into individual tables,\nbut this was an experiment to see what the performance was like. I was\nexpecting that effectively appending all of the individual tables into\na great big table would result in less redundant information being\nstored in indices and, therefore, a bit more speed and efficiency.\nHowever, I have to admit I was very surprised at the performance\nreduction.\n\nWhat is the greater lesson to take away, here? If you are working with\ndata that is larger (substantially larger) than available memory, is\nthe architecture and design of postgresql such that the only real\napproach is some type of data partitioning? It is not my intent to\ninsult or even disparage my favorite software, but it took less time\nto *build* the indices for 550GB of data than it would have to insert\n1/20th as much. That doesn't seem right.\n\n-- \nJon\n",
"msg_date": "Mon, 16 Jul 2012 12:56:12 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: very very slow inserts into very large table"
},
{
"msg_contents": "On Mon, Jul 16, 2012 at 10:35 AM, Samuel Gendler\n<[email protected]> wrote:\n> On Mon, Jul 16, 2012 at 7:06 AM, Mark Thornton <[email protected]> wrote:\n>>>\n>>>\n>> Every insert updates four indexes, so at least 3 of those will be in\n>> random order. The indexes don't fit in memory, so all those updates will\n>> involve reading most of the relevant b-tree pages from disk (or at least the\n>> leaf level). A total of 10ms of random read from disk (per inserted row)\n>> wouldn't surprise me ... which adds up to more than 10 days for your 93\n>> million rows.\n>\n>\n> Which is the long way of saying that you will likely benefit from\n> partitioning that table into a number of smaller tables, especially if\n> queries on that table tend to access only a subset of the data that can be\n> defined to always fit into a smaller number of partitions than the total.\n> At the very least, inserts will be faster because individual indexes will be\n> smaller.\n\nIf the select locality and the insert locality are not the same, and\nthe table is partitioned according to the select locality, then the\ntotal index size needed to be accessed during the inserts will be\nslightly larger, not smaller, under the partitioning and the inserts\nwill not perform well.\n\nOn the other hand, if the select locality and the insert locality are\nthe same, it should be possible to change the index definitions in a\nway to get all the gains of your described partitioning, without\nactually doing the partitioning.\n\n> But unless all queries can't be constrained to fit within a subset\n> of partitions, you'll also see improved performance on selects.\n\nWhen you can't constrain the queries to fit within a subset of the\npartitions is where I see a possible win from partitioning that can't\nbe obtained other ways. By using partitioning, you can greatly\nincrease the insert performance by imposing a small cost on each\nquery. The total cost is at least as great, but you have re-arranged\nhow the cost is amortized into a more acceptable shape.\n\nCheers,\n\nJeff\n",
"msg_date": "Mon, 16 Jul 2012 11:28:33 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very very slow inserts into very large table"
},
{
"msg_contents": "On 16/07/12 18:56, Jon Nelson wrote:\n> It is not my intent to\n> insult or even disparage my favorite software, but it took less time\n> to*build* the indices for 550GB of data than it would have to insert\n> 1/20th as much. That doesn't seem right.\nMy explanation would apply to many databases, not just Postgres.\n\nTo speed up the insert there are a number of possible approaches:\n\n1. Partition the data and then sort the temporary table into groups \nbased on the partitioning. Best of all if all the new data goes into a \nsingle partition.\n\n2. Drop the indexes before insert and rebuild afterwards.\n\n3. Reduce the number of indexes. If you only have one index, you can \nsort the data to be inserted in the natural order of the index. If you \nmust have more than one index you could still sort the new data in the \norder of one of them to obtain a modest improvement in locality.\n\n4. The most efficient way for the database itself to do the updates \nwould be to first insert all the data in the table, and then update each \nindex in turn having first sorted the inserted keys in the appropriate \norder for that index.\n\nMark\n\n\n\n\n\n\n\n\nOn 16/07/12 18:56, Jon Nelson wrote:\n\n\nIt is not my intent to\ninsult or even disparage my favorite software, but it took less time\nto *build* the indices for 550GB of data than it would have to insert\n1/20th as much. That doesn't seem right.\n\n My explanation would apply to many databases, not just Postgres.\n\n To speed up the insert there are a number of possible approaches:\n\n 1. Partition the data and then sort the temporary table into groups\n based on the partitioning. Best of all if all the new data goes into\n a single partition.\n\n 2. Drop the indexes before insert and rebuild afterwards.\n\n 3. Reduce the number of indexes. If you only have one index, you can\n sort the data to be inserted in the natural order of the index. If\n you must have more than one index you could still sort the new data\n in the order of one of them to obtain a modest improvement in\n locality.\n\n 4. The most efficient way for the database itself to do the updates\n would be to first insert all the data in the table, and then update\n each index in turn having first sorted the inserted keys in the\n appropriate order for that index.\n\n Mark",
"msg_date": "Mon, 16 Jul 2012 19:59:07 +0100",
"msg_from": "Mark Thornton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very very slow inserts into very large table"
},
{
"msg_contents": "On Mon, Jul 16, 2012 at 3:59 PM, Mark Thornton <[email protected]> wrote:\n> 4. The most efficient way for the database itself to do the updates would be\n> to first insert all the data in the table, and then update each index in\n> turn having first sorted the inserted keys in the appropriate order for that\n> index.\n\nActually, it should create a temporary index btree and merge[0] them.\nOnly worth if there are really a lot of rows.\n\n[0] http://www.ccs.neu.edu/home/bradrui/index_files/parareorg.pdf\n",
"msg_date": "Mon, 16 Jul 2012 16:08:14 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very very slow inserts into very large table"
},
{
"msg_contents": "On 16/07/12 20:08, Claudio Freire wrote:\n> On Mon, Jul 16, 2012 at 3:59 PM, Mark Thornton <[email protected]> wrote:\n>> 4. The most efficient way for the database itself to do the updates would be\n>> to first insert all the data in the table, and then update each index in\n>> turn having first sorted the inserted keys in the appropriate order for that\n>> index.\n> Actually, it should create a temporary index btree and merge[0] them.\n> Only worth if there are really a lot of rows.\n>\n> [0] http://www.ccs.neu.edu/home/bradrui/index_files/parareorg.pdf\nI think 93 million would qualify as a lot of rows. However does any \navailable database (commercial or open source) use this optimisation.\n\nMark\n\n\n",
"msg_date": "Mon, 16 Jul 2012 20:16:11 +0100",
"msg_from": "Mark Thornton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very very slow inserts into very large table"
},
{
"msg_contents": "On Mon, Jul 16, 2012 at 4:16 PM, Mark Thornton <[email protected]> wrote:\n>> Actually, it should create a temporary index btree and merge[0] them.\n>> Only worth if there are really a lot of rows.\n>>\n>> [0] http://www.ccs.neu.edu/home/bradrui/index_files/parareorg.pdf\n>\n> I think 93 million would qualify as a lot of rows. However does any\n> available database (commercial or open source) use this optimisation.\n\nDatabases, I honestly don't know. But I do know most document\nretrieval engines use a similar technique with inverted indexes.\n",
"msg_date": "Mon, 16 Jul 2012 17:01:21 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very very slow inserts into very large table"
},
{
"msg_contents": "On 07/17/2012 01:56 AM, Jon Nelson wrote:\n> What is the greater lesson to take away, here? If you are working with \n> data that is larger (substantially larger) than available memory, is \n> the architecture and design of postgresql such that the only real \n> approach is some type of data partitioning? It is not my intent to \n> insult or even disparage my favorite software, but it took less time \n> to *build* the indices for 550GB of data than it would have to insert \n> 1/20th as much. That doesn't seem right. \n\nTo perform reasonably well, Pg would need to be able to defer index \nupdates when bulk-loading data in a single statement (or even \ntransaction), then apply them when the statement finished or transaction \ncommitted. Doing this at a transaction level would mean you'd need a way \nto mark indexes as 'lazily updated' and have Pg avoid using them once \nthey'd been dirtied within a transaction. No such support currently \nexists, and it'd be non-trivial to implement, especially since people \nloading huge amounts of data often want to do it with multiple \nconcurrent sessions. You'd need some kind of 'DISABLE INDEX' and 'ENABLE \nINDEX' commands plus a transactional backing table of pending index updates.\n\nNot simple.\n\n\nRight now, Pg is trying to keep the index consistent the whole time. \nThat involves moving a heck of a lot of data around - repeatedly.\n\nSetting a lower FILLFACTOR on your indexes can give Pg some breathing \nroom here, but only a limited amount, and at the cost of reduced scan \nefficiency.\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 17 Jul 2012 11:30:56 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very very slow inserts into very large table"
},
{
"msg_contents": "2012/07/16 22:37, Jon Nelson wrote:\n> I have a single *table* that is some 560GB in size, 6 columns, average\n> row width 63.\n> There are approximately 6.1 billion rows.\n> It has 4 indices, 131, 313, 131 and 190 GB in size, respectively. All\n> are btree indices.\n>\n> I tried inserting new data into the table, and it's taking a *very* long time.\n> I pre-built the data to be inserted into a temporary table with the\n> exact same structure and column ordering, etc, and the temporary table\n> is about 8.5GB in size with about 93 million rows.\n> The temporary table was built in about 95 seconds.\n> The insert has been going for 47 hours and 21 minutes, give or take.\n> I'm not doing any correlation or filtering, etc -- straight up\n> insert, literally \"insert into big_table select * from\n> the_temp_table;\".\n>\n> vmstat output doesn't seem that useful, with disk wait being 10-15%\n> and I/O speeds highly variable, from 5-20MB/s reads couple with\n> 0-16MB/s writes, generally on the lower end of these.\n> strace of the inserting process shows that it's basically hammering\n> the disk in terms of random reads and infrequent writes.\n> postgresql. It's not verifying, rebuilding, etc. While this process is\n> active, streaming write I/O is terrible - 36MB/s. WIth it \"paused\"\n> (via strace) I get 72MB/s. (reads are 350MB/s).\n\nI think the most possible reason could exists around WAL and its\nbuffers.\n\nBut it's just my guess, and you need to determine a cause of the\nsituation precisely. Disk I/O operations must be broken down\ninto the PostgreSQL context, such as block reads, wal writes or bgwiter.\n\nIf you want to know what's actually going on inside PostgreSQL,\npgstatview may help you that.\n\nhttp://pgsnaga.blogspot.jp/2012/06/pgstatview-visualize-your-postgresql-in.html\nhttp://www2.uptimeforce.com/pgstatview/\n\npgstatview provides an easy way not only to visualize your performance\nstatistics while workload, but also to share it with the PostgreSQL\nexperts.\n\nHere is an example of the report:\nhttp://www2.uptimeforce.com/pgstatview/a9ee29aa84668cca2d8cdfd2556d370c/\n\nI believe you can find some thoughts from visualizing and comparing\nyour statistics between your temp table and regular table.\n\nRegards,\n\n>\n> The OS is Scientific Linux 6.2, and the version of postgresql is 9.1.4\n> - x86_64. There is nothing else of note happening on the box. The box\n> is a quad CPU, dual-core each Xeon E5430 @ 2.66GHz with 32GB of RAM\n> and a 3ware 9690 RAID 4TB RAID10 for the storage for\n>\n> What might be going on here?\n>\n>\n\n-- \nSatoshi Nagayasu <[email protected]>\nUptime Technologies, LLC. http://www.uptime.jp\n\n",
"msg_date": "Tue, 17 Jul 2012 12:49:01 +0900",
"msg_from": "Satoshi Nagayasu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very very slow inserts into very large table"
},
{
"msg_contents": "On Tue, Jul 17, 2012 at 6:30 AM, Craig Ringer <[email protected]> wrote:\n> On 07/17/2012 01:56 AM, Jon Nelson wrote:\n> To perform reasonably well, Pg would need to be able to defer index updates\n> when bulk-loading data in a single statement (or even transaction), then\n> apply them when the statement finished or transaction committed. Doing this\n> at a transaction level would mean you'd need a way to mark indexes as\n> 'lazily updated' and have Pg avoid using them once they'd been dirtied\n> within a transaction. No such support currently exists, and it'd be\n> non-trivial to implement, especially since people loading huge amounts of\n> data often want to do it with multiple concurrent sessions. You'd need some\n> kind of 'DISABLE INDEX' and 'ENABLE INDEX' commands plus a transactional\n> backing table of pending index updates.\n\nIt seems to me that if the insertion is done as a single statement it\nwouldn't be a problem to collect up all btree insertions and apply\nthem before completing the statement. I'm not sure how much that would\nhelp though. If the new rows have uniform distribution you end up\nreading in the whole index anyway. Because indexes are not stored in\nlogical order you don't get to benefit from sequential I/O.\n\nThe lazy merging approach (the paper that Claudio linked) on the other\nhand seems promising but a lot trickier to implement.\n\nRegards,\nAnts Aasma\n-- \nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26\nA-2700 Wiener Neustadt\nWeb: http://www.postgresql-support.de\n",
"msg_date": "Tue, 17 Jul 2012 18:59:43 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very very slow inserts into very large table"
},
{
"msg_contents": "On Tue, Jul 17, 2012 at 8:59 AM, Ants Aasma <[email protected]> wrote:\n> On Tue, Jul 17, 2012 at 6:30 AM, Craig Ringer <[email protected]> wrote:\n>> On 07/17/2012 01:56 AM, Jon Nelson wrote:\n>> To perform reasonably well, Pg would need to be able to defer index updates\n>> when bulk-loading data in a single statement (or even transaction), then\n>> apply them when the statement finished or transaction committed. Doing this\n>> at a transaction level would mean you'd need a way to mark indexes as\n>> 'lazily updated' and have Pg avoid using them once they'd been dirtied\n>> within a transaction. No such support currently exists, and it'd be\n>> non-trivial to implement, especially since people loading huge amounts of\n>> data often want to do it with multiple concurrent sessions. You'd need some\n>> kind of 'DISABLE INDEX' and 'ENABLE INDEX' commands plus a transactional\n>> backing table of pending index updates.\n>\n> It seems to me that if the insertion is done as a single statement it\n> wouldn't be a problem to collect up all btree insertions and apply\n> them before completing the statement. I'm not sure how much that would\n> help though. If the new rows have uniform distribution you end up\n> reading in the whole index anyway. Because indexes are not stored in\n> logical order you don't get to benefit from sequential I/O.\n\nIn this case, he is loading new data that is 5% of the current data\nsize. A leaf page probably has much more than 20 entries, so by\nsorting them you could turn many scattered accesses to the same page\nto one access (or many accesses that immediately follow each other,\nand so are satisfied by the cache).\n\nAlso, while indexes are not formally kept in logical order, but they\ndo tend to be biased in that direction in most cases. I've found that\neven if you are only inserting one row for every 4 or 5 leaf pages,\nyou still get substantial improvement by doing so in sorted order.\n\nCheers,\n\nJeff\n",
"msg_date": "Tue, 17 Jul 2012 09:24:37 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very very slow inserts into very large table"
},
{
"msg_contents": "On Tue, Jul 17, 2012 at 1:24 PM, Jeff Janes <[email protected]> wrote:\n> Also, while indexes are not formally kept in logical order, but they\n> do tend to be biased in that direction in most cases. I've found that\n> even if you are only inserting one row for every 4 or 5 leaf pages,\n> you still get substantial improvement by doing so in sorted order.\n\nYep, I do the same. Whenever I have to perform massive updates, I sort them.\n\nAlthough \"massive\" for me is nowhere near what \"massive\" for the OP is.\n",
"msg_date": "Tue, 17 Jul 2012 13:55:49 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very very slow inserts into very large table"
}
] |
[
{
"msg_contents": "Howdy,\n\nI've got a couple of tables that are taking a little longer than normal to extend, resulting \nin some slow inserts.\n\nThey're fairly large tables, ~200GB pg_total_relation_size (90GB for just the table)\n\nI suspect that this is related to a sustained heavy load that would stop autovacuum from\ngetting at this table... Does that sound plausible? \n\nI'm wondering what options I have to smooth over these episodes / speed up the extensions.\nI'm thinking of something like, CLUSTER or VACUUM FULL (those take quite a run so I'd like \nsome direction on it before i TiaS =) )\n\nI suspect that Partitioning would help. Any other ideas?\n\n\nJul 17 08:11:52 perf: [3-1] user=test,db=perf LOG: process 11812 still waiting for ExclusiveLock \non extension of relation 60777 of database 16387 after 1000.270 ms\n\nSystem resouces were fine:\n\nPGDATA\n------\n07/17/12 08:11:48\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\ndm-2 1.20 3085.20 77.20 3994.20 15363.20 56680.00 17.69 15.57 3.82 0.06 26.22\n\n07/17/12 08:11:53\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\ndm-2 0.40 2097.20 51.80 2610.20 10344.00 37659.20 18.03 5.23 1.96 0.05 14.28\n\n\nPGXLOG\n------\n07/17/12 08:11:48\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\ndm-4 0.00 3958.20 0.00 600.40 0.00 36449.60 60.71 0.44 0.74 0.73 43.54\n\n07/17/12 08:11:53\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\ndm-4 0.00 2905.20 0.00 403.40 0.00 26492.80 65.67 0.32 0.80 0.79 31.96\n\nCPU\n------\n CPU %user %nice %system %iowait %steal %idle\n08:11:48 all 24.49 0.00 3.19 1.17 0.00 71.15\n08:11:53 all 17.53 0.00 3.13 0.68 0.00 78.65\n\n",
"msg_date": "Tue, 17 Jul 2012 08:57:52 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Process 11812 still waiting for ExclusiveLock on extension of\n relation"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jul 17, 2012 at 7:57 PM, David Kerr <[email protected]> wrote:\n> I suspect that this is related to a sustained heavy load that would stop autovacuum from\n> getting at this table... Does that sound plausible?\n\nWell, not sure. Let us look at the table's statistics first.\n\n\\x\nselect * from pg_stat_user_tables where relname = 'yourtablename';\n\n> I'm wondering what options I have to smooth over these episodes / speed up the extensions.\n> I'm thinking of something like, CLUSTER or VACUUM FULL (those take quite a run so I'd like\n> some direction on it before i TiaS =) )\n\nInstead of CLUSTER I would suggest you to use one of the tools below.\nThey do not block the table as CLUSTER does.\n\npg_reorg http://reorg.projects.postgresql.org/pg_reorg.html\nFaster, but requires a lot of IO and additional disk space, also it\nneeds PK on the table.\n\npgcompactor http://code.google.com/p/pgtoolkit/\nAllows to smooth IO, auto-determines reorganizing necessity for tables\nand indexes, no PK restriction.\n\n> I suspect that Partitioning would help. Any other ideas?\n\nPartitioning is a good thing to think about when you deal with big tables.\n\n>\n>\n> Jul 17 08:11:52 perf: [3-1] user=test,db=perf LOG: process 11812 still waiting for ExclusiveLock\n> on extension of relation 60777 of database 16387 after 1000.270 ms\n>\n> System resouces were fine:\n>\n> PGDATA\n> ------\n> 07/17/12 08:11:48\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\n> dm-2 1.20 3085.20 77.20 3994.20 15363.20 56680.00 17.69 15.57 3.82 0.06 26.22\n>\n> 07/17/12 08:11:53\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\n> dm-2 0.40 2097.20 51.80 2610.20 10344.00 37659.20 18.03 5.23 1.96 0.05 14.28\n>\n>\n> PGXLOG\n> ------\n> 07/17/12 08:11:48\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\n> dm-4 0.00 3958.20 0.00 600.40 0.00 36449.60 60.71 0.44 0.74 0.73 43.54\n>\n> 07/17/12 08:11:53\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\n> dm-4 0.00 2905.20 0.00 403.40 0.00 26492.80 65.67 0.32 0.80 0.79 31.96\n>\n> CPU\n> ------\n> CPU %user %nice %system %iowait %steal %idle\n> 08:11:48 all 24.49 0.00 3.19 1.17 0.00 71.15\n> 08:11:53 all 17.53 0.00 3.13 0.68 0.00 78.65\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSergey Konoplev\n\na database architect, software developer at PostgreSQL-Consulting.com\nhttp://www.postgresql-consulting.com\n\nJabber: [email protected] Skype: gray-hemp Phone: +79160686204\n",
"msg_date": "Wed, 18 Jul 2012 16:08:47 +0400",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Process 11812 still waiting for ExclusiveLock on\n\textension of relation"
},
{
"msg_contents": "\nOn Jul 18, 2012, at 5:08 AM, Sergey Konoplev wrote:\n\n> Hi,\n> \n> On Tue, Jul 17, 2012 at 7:57 PM, David Kerr <[email protected]> wrote:\n>> I suspect that this is related to a sustained heavy load that would stop autovacuum from\n>> getting at this table... Does that sound plausible?\n> \n> Well, not sure. Let us look at the table's statistics first.\n> \n> \\x\n> select * from pg_stat_user_tables where relname = 'yourtablename';\nthe load is controlled and only lasts a few hours. at this point auto vacuum has gotten to the table and done it's thing.\n\n> \n>> I'm wondering what options I have to smooth over these episodes / speed up the extensions.\n>> I'm thinking of something like, CLUSTER or VACUUM FULL (those take quite a run so I'd like\n>> some direction on it before i TiaS =) )\n> \n> Instead of CLUSTER I would suggest you to use one of the tools below.\n> They do not block the table as CLUSTER does.\n> \n> pg_reorg http://reorg.projects.postgresql.org/pg_reorg.html\n> Faster, but requires a lot of IO and additional disk space, also it\n> needs PK on the table.\n> \n> pgcompactor http://code.google.com/p/pgtoolkit/\n> Allows to smooth IO, auto-determines reorganizing necessity for tables\n> and indexes, no PK restriction.\n\nI haven't given these projects much thought in the past, but I guess we're getting to the size where that sort\nof thing might come in handy. I'll have a look.\n\n> \n>> I suspect that Partitioning would help. Any other ideas?\n> \n> Partitioning is a good thing to think about when you deal with big tables.\n\nYeah. unless you're using hibernate which expects inserts to return the # of rows entered (unless\nyou disable that) which we are. or you have fairly dynamic data that doesn't have a great partition key.\n\n\nthanks",
"msg_date": "Wed, 18 Jul 2012 18:43:28 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Process 11812 still waiting for ExclusiveLock on extension of\n\trelation"
}
] |
[
{
"msg_contents": "We're seeing slow application performance on a PostgreSQL 9.1 server which\nappears to be relatively lightly loaded. Some graphs from pgstatview are\nat http://www2.uptimeforce.com/pgstatview/e35ba4e7db0842a1b9cf2e10a4c03d91/\n These cover approximately 40 minutes, during which there was some activity\nfrom a web application and two bulk loads in process.\n\nThe machine running the bulk loads (perl scripts) is also running at about\n70% idle with very little iowait. That seems to suggest network latency to\nme.\n\nAm I missing something in the server stats that would indicate a problem?\n If not, where should I look next?\n\n__________________________________________________________________________________\n*Mike Blackwell | Technical Analyst, Distribution Services/Rollout\nManagement | RR Donnelley*\n1750 Wallace Ave | St Charles, IL 60174-3401\nOffice: 630.313.7818\[email protected]\nhttp://www.rrdonnelley.com\n\n\n<http://www.rrdonnelley.com/>\n* <[email protected]>*\n\nWe're seeing slow application performance on a PostgreSQL 9.1 server which appears to be relatively lightly loaded. Some graphs from pgstatview are at \nhttp://www2.uptimeforce.com/pgstatview/e35ba4e7db0842a1b9cf2e10a4c03d91/ These cover approximately 40 minutes, during which there was some activity from a web application and two bulk loads in process. \nThe machine running the bulk loads (perl scripts) is also running at about 70% idle with very little iowait. That seems to suggest network latency to me.Am I missing something in the server stats that would indicate a problem? If not, where should I look next? \n__________________________________________________________________________________\nMike Blackwell | Technical Analyst, Distribution Services/Rollout Management | RR Donnelley\n1750 Wallace Ave | St Charles, IL 60174-3401 \nOffice: 630.313.7818 \[email protected]\nhttp://www.rrdonnelley.com",
"msg_date": "Tue, 17 Jul 2012 11:27:23 -0500",
"msg_from": "Mike Blackwell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow application response on lightly loaded server?"
},
{
"msg_contents": "On Tue, Jul 17, 2012 at 10:27 AM, Mike Blackwell <[email protected]> wrote:\n> We're seeing slow application performance on a PostgreSQL 9.1 server which\n> appears to be relatively lightly loaded. Some graphs from pgstatview are at\n> http://www2.uptimeforce.com/pgstatview/e35ba4e7db0842a1b9cf2e10a4c03d91/\n> These cover approximately 40 minutes, during which there was some activity\n> from a web application and two bulk loads in process.\n>\n> The machine running the bulk loads (perl scripts) is also running at about\n> 70% idle with very little iowait. That seems to suggest network latency to\n> me.\n>\n> Am I missing something in the server stats that would indicate a problem?\n> If not, where should I look next?\n\nI'd run vmstat and look for high cs or int numbers (100k and above) to\nsee if you're maybe seeing an issue with that. A lot of times a\n\"slow\" server is just too much process switching. But yeah, the\ngraphs you've posted don't seem overly bad.\n",
"msg_date": "Tue, 17 Jul 2012 10:35:03 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow application response on lightly loaded server?"
},
{
"msg_contents": "On Tue, Jul 17, 2012 at 11:35 AM, Scott Marlowe <[email protected]>\n wrote:\n\nI'd run vmstat and look for high cs or int numbers (100k and above) to\n> see if you're maybe seeing an issue with that. A lot of times a\n> \"slow\" server is just too much process switching. But yeah, the\n> graphs you've posted don't seem overly bad.\n>\n\n\nThanks for the tip. Here's a quick look at those numbers under that same\nload. Watching it for a while longer didn't show any spikes. That doesn't\nseem to be it, either.\n\n$ vmstat 5\nprocs -----------memory---------- ---swap-- -----io---- --system--\n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id\nwa\n 3 0 11868 34500 16048 3931436 0 0 4 2 0 0 6 2\n91 1\n 2 0 11868 21964 16088 3931396 0 0 0 212 8667 8408 15 3\n80 2\n 0 0 11868 37772 16112 3932152 0 0 2 249 9109 8811 34 2\n62 1\n 2 0 11868 34068 16124 3932400 0 0 1 168 9142 9165 12 3\n84 1\n 1 0 11868 38036 16124 3932920 0 0 8 155 9995 10904 16 4\n80 1\n 1 0 11868 40212 16124 3933440 0 0 0 146 9586 9825 13 3\n83 1\n\n__________________________________________________________________________________\n*Mike Blackwell | Technical Analyst, Distribution Services/Rollout\nManagement | RR Donnelley*\n1750 Wallace Ave | St Charles, IL 60174-3401\nOffice: 630.313.7818\[email protected]\nhttp://www.rrdonnelley.com\n\n\n<http://www.rrdonnelley.com/>\n\nOn Tue, Jul 17, 2012 at 11:35 AM, Scott Marlowe <[email protected]> wrote:\nI'd run vmstat and look for high cs or int numbers (100k and above) to\nsee if you're maybe seeing an issue with that. A lot of times a\"slow\" server is just too much process switching. But yeah, thegraphs you've posted don't seem overly bad.\nThanks for the tip. Here's a quick look at those numbers under that same load. Watching it for a while longer didn't show any spikes. That doesn't seem to be it, either. \n$ vmstat 5procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 3 0 11868 34500 16048 3931436 0 0 4 2 0 0 6 2 91 1\n 2 0 11868 21964 16088 3931396 0 0 0 212 8667 8408 15 3 80 2 0 0 11868 37772 16112 3932152 0 0 2 249 9109 8811 34 2 62 1 2 0 11868 34068 16124 3932400 0 0 1 168 9142 9165 12 3 84 1\n 1 0 11868 38036 16124 3932920 0 0 8 155 9995 10904 16 4 80 1 1 0 11868 40212 16124 3933440 0 0 0 146 9586 9825 13 3 83 1\n__________________________________________________________________________________\nMike Blackwell | Technical Analyst, Distribution Services/Rollout Management | RR Donnelley\n1750 Wallace Ave | St Charles, IL 60174-3401 \nOffice: 630.313.7818 \[email protected]\nhttp://www.rrdonnelley.com",
"msg_date": "Tue, 17 Jul 2012 12:37:43 -0500",
"msg_from": "Mike Blackwell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow application response on lightly loaded server?"
},
{
"msg_contents": "On Tue, Jul 17, 2012 at 11:37 AM, Mike Blackwell <[email protected]> wrote:\n>\n> On Tue, Jul 17, 2012 at 11:35 AM, Scott Marlowe <[email protected]>\n> wrote:\n>\n>> I'd run vmstat and look for high cs or int numbers (100k and above) to\n>> see if you're maybe seeing an issue with that. A lot of times a\n>> \"slow\" server is just too much process switching. But yeah, the\n>> graphs you've posted don't seem overly bad.\n>\n>\n>\n> Thanks for the tip. Here's a quick look at those numbers under that same\n> load. Watching it for a while longer didn't show any spikes. That doesn't\n> seem to be it, either.\n\nYep it all looks good to me. Are you sure you're not getting network\nlag or something like that?\n",
"msg_date": "Tue, 17 Jul 2012 11:49:51 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow application response on lightly loaded server?"
},
{
"msg_contents": "I'm wondering about that. However, the database server and the server\ndoing the bulk loads are on the same subnet. Traceroute shows only a\nsingle hop. Traceroute and ping both show reply times in the area of .25 -\n.50 ms or so. Is that reasonable?\n\n__________________________________________________________________________________\n*Mike Blackwell | Technical Analyst, Distribution Services/Rollout\nManagement | RR Donnelley*\n1750 Wallace Ave | St Charles, IL 60174-3401\nOffice: 630.313.7818\[email protected]\nhttp://www.rrdonnelley.com\n\n\n<http://www.rrdonnelley.com/>\n* <[email protected]>*\n\n\nOn Tue, Jul 17, 2012 at 12:49 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Tue, Jul 17, 2012 at 11:37 AM, Mike Blackwell <[email protected]>\n> wrote:\n> >\n> > On Tue, Jul 17, 2012 at 11:35 AM, Scott Marlowe <[email protected]\n> >\n> > wrote:\n> >\n> >> I'd run vmstat and look for high cs or int numbers (100k and above) to\n> >> see if you're maybe seeing an issue with that. A lot of times a\n> >> \"slow\" server is just too much process switching. But yeah, the\n> >> graphs you've posted don't seem overly bad.\n> >\n> >\n> >\n> > Thanks for the tip. Here's a quick look at those numbers under that same\n> > load. Watching it for a while longer didn't show any spikes. That\n> doesn't\n> > seem to be it, either.\n>\n> Yep it all looks good to me. Are you sure you're not getting network\n> lag or something like that?\n>\n\nI'm wondering about that. However, the database server and the server doing the bulk loads are on the same subnet. Traceroute shows only a single hop. Traceroute and ping both show reply times in the area of .25 - .50 ms or so. Is that reasonable?\n__________________________________________________________________________________\nMike Blackwell | Technical Analyst, Distribution Services/Rollout Management | RR Donnelley\n1750 Wallace Ave | St Charles, IL 60174-3401 \nOffice: 630.313.7818 \[email protected]\nhttp://www.rrdonnelley.com\n\nOn Tue, Jul 17, 2012 at 12:49 PM, Scott Marlowe <[email protected]> wrote:\nOn Tue, Jul 17, 2012 at 11:37 AM, Mike Blackwell <[email protected]> wrote:\n>\n> On Tue, Jul 17, 2012 at 11:35 AM, Scott Marlowe <[email protected]>\n> wrote:\n>\n>> I'd run vmstat and look for high cs or int numbers (100k and above) to\n>> see if you're maybe seeing an issue with that. A lot of times a\n>> \"slow\" server is just too much process switching. But yeah, the\n>> graphs you've posted don't seem overly bad.\n>\n>\n>\n> Thanks for the tip. Here's a quick look at those numbers under that same\n> load. Watching it for a while longer didn't show any spikes. That doesn't\n> seem to be it, either.\n\nYep it all looks good to me. Are you sure you're not getting network\nlag or something like that?",
"msg_date": "Tue, 17 Jul 2012 13:10:31 -0500",
"msg_from": "Mike Blackwell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow application response on lightly loaded server?"
},
{
"msg_contents": "Yeah seems reasonable. The last thing I'd look at is something like\nimproperly configured dns service. Are you connecting by IP or by\nhost name?\n\nOn Tue, Jul 17, 2012 at 12:10 PM, Mike Blackwell <[email protected]> wrote:\n> I'm wondering about that. However, the database server and the server doing\n> the bulk loads are on the same subnet. Traceroute shows only a single hop.\n> Traceroute and ping both show reply times in the area of .25 - .50 ms or so.\n> Is that reasonable?\n>\n> __________________________________________________________________________________\n> Mike Blackwell | Technical Analyst, Distribution Services/Rollout Management\n> | RR Donnelley\n> 1750 Wallace Ave | St Charles, IL 60174-3401\n> Office: 630.313.7818\n> [email protected]\n> http://www.rrdonnelley.com\n>\n>\n>\n>\n>\n> On Tue, Jul 17, 2012 at 12:49 PM, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> On Tue, Jul 17, 2012 at 11:37 AM, Mike Blackwell <[email protected]>\n>> wrote:\n>> >\n>> > On Tue, Jul 17, 2012 at 11:35 AM, Scott Marlowe\n>> > <[email protected]>\n>> > wrote:\n>> >\n>> >> I'd run vmstat and look for high cs or int numbers (100k and above) to\n>> >> see if you're maybe seeing an issue with that. A lot of times a\n>> >> \"slow\" server is just too much process switching. But yeah, the\n>> >> graphs you've posted don't seem overly bad.\n>> >\n>> >\n>> >\n>> > Thanks for the tip. Here's a quick look at those numbers under that\n>> > same\n>> > load. Watching it for a while longer didn't show any spikes. That\n>> > doesn't\n>> > seem to be it, either.\n>>\n>> Yep it all looks good to me. Are you sure you're not getting network\n>> lag or something like that?\n>\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Tue, 17 Jul 2012 13:36:19 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow application response on lightly loaded server?"
},
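One way to test that from the bulk-load machine is to time a trivial round trip by host name and by raw IP; the host, user and database names below are placeholders, not values from this thread:

    # compare connection + query latency via DNS name vs. raw IP
    time psql -h dbhost    -U appuser -d appdb -c 'SELECT 1' > /dev/null
    time psql -h 10.0.0.50 -U appuser -d appdb -c 'SELECT 1' > /dev/null

    # and time the resolver on its own
    time getent hosts dbhost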
{
"msg_contents": "On Tue, Jul 17, 2012 at 2:36 PM, Scott Marlowe <[email protected]>\n wrote:\n\n\n> Yeah seems reasonable. The last thing I'd look at is something like\n> improperly configured dns service. Are you connecting by IP or by\n> host name?\n>\n>\nInteresting possibility. We're currently connecting by host name. I could\ntry temporarily using the IP from one of the servers to see if that helps.\n I'm not familiar enough with DNS services to do any diagnostics other than\nusing dig to see where something points.\n\nThanks for your help, BTW!\n\n__________________________________________________________________________________\n*Mike Blackwell | Technical Analyst, Distribution Services/Rollout\nManagement | RR Donnelley*\n1750 Wallace Ave | St Charles, IL 60174-3401\nOffice: 630.313.7818\[email protected]\nhttp://www.rrdonnelley.com\n\nOn Tue, Jul 17, 2012 at 2:36 PM, Scott Marlowe <[email protected]> wrote: \nYeah seems reasonable. The last thing I'd look at is something likeimproperly configured dns service. Are you connecting by IP or byhost name?\n Interesting possibility. We're currently connecting by host name. I could try temporarily using the IP from one of the servers to see if that helps. I'm not familiar enough with DNS services to do any diagnostics other than using dig to see where something points.\nThanks for your help, BTW!__________________________________________________________________________________\nMike Blackwell | Technical Analyst, Distribution Services/Rollout Management | RR Donnelley\n1750 Wallace Ave | St Charles, IL 60174-3401 \nOffice: 630.313.7818 \[email protected]\nhttp://www.rrdonnelley.com",
"msg_date": "Tue, 17 Jul 2012 14:48:26 -0500",
"msg_from": "Mike Blackwell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow application response on lightly loaded server?"
},
{
"msg_contents": "Well if it suddenly gets faster when connecting by IP, you'll know\nwhere your problem lies. DNS issues are more common in windows\ninstalls, due to Windows having more interesting ways to misconfigure\ndns etc.\n\nOn Tue, Jul 17, 2012 at 1:48 PM, Mike Blackwell <[email protected]> wrote:\n> On Tue, Jul 17, 2012 at 2:36 PM, Scott Marlowe <[email protected]>\n> wrote:\n>\n>\n>>\n>> Yeah seems reasonable. The last thing I'd look at is something like\n>> improperly configured dns service. Are you connecting by IP or by\n>> host name?\n>>\n>\n> Interesting possibility. We're currently connecting by host name. I could\n> try temporarily using the IP from one of the servers to see if that helps.\n> I'm not familiar enough with DNS services to do any diagnostics other than\n> using dig to see where something points.\n>\n> Thanks for your help, BTW!\n>\n> __________________________________________________________________________________\n> Mike Blackwell | Technical Analyst, Distribution Services/Rollout Management\n> | RR Donnelley\n> 1750 Wallace Ave | St Charles, IL 60174-3401\n> Office: 630.313.7818\n> [email protected]\n> http://www.rrdonnelley.com\n>\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Tue, 17 Jul 2012 15:00:30 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow application response on lightly loaded server?"
}
] |
[
{
"msg_contents": "Newer Linux systems with lots of cores have a problem I've been running \ninto a lot more lately I wanted to share initial notes on. My \"newer\" \nmeans running the 2.6.32 kernel or later, since I mostly track \n\"enterprise\" Linux distributions like RHEL6 and Debian Squeeze. The \nissue is around Linux's zone_reclaim feature. When it pops up, turning \nthat feature off help a lot. Details on what I understand of the \nproblem are below, and as always things may have changed already in even \nnewer kernels.\n\nzone_reclaim tries to optimize memory speed on NUMA systems with more \nthan one CPU socket. There some banks of memory that can be \"closer\" to \na particular socket, as measured by transfer rate, because of how the \nmemory is routed to the various cores on each socket. There is no true \ndefault for this setting. Linux checks the hardware and turns this \non/off based on what transfer rate it sees between NUMA nodes, where \nthere are more than one and its test shows some distance between them. \nYou can tell if this is turned on like this:\n\necho /proc/sys/vm/zone_reclaim_mode\n\nWhere 1 means it's enabled. Install the numactl utility and you can see \nwhy it's made that decision:\n\n# numactl --hardware\navailable: 2 nodes (0-1)\nnode 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17\nnode 0 size: 73718 MB\nnode 0 free: 419 MB\nnode 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23\nnode 1 size: 73728 MB\nnode 1 free: 30 MB\nnode distances:\nnode 0 1\n 0: 10 21\n 1: 21 10\n\nNote how the \"distance\" for a transfer from node 0->0 or 1->1 is 10 \nunits, while 0->1 or 1->0 is 21. That what's tested at boot time, where \nthe benchmarked speed is turned into this abstract distance number. And \nif there is a large difference in cross-zone timing, then zone reclaim \nis enabled.\n\nScott Marlowe has been griping about this on the mailing lists here for \na while now, and it's increasingly trouble for systems I've been seeing \nlately too. This is a well known problem with MySQL: \nhttp://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/ \nand NUMA issues have impacted Oracle too. On PostgreSQL shared_buffers \nisn't normally set as high as MySQL's buffer cache, making it a bit less \nvulnerable to this class of problem. But it's surely still a big \nproblem for PostgreSQL on some systems.\n\nI've taken to disabling /proc/sys/vm/zone_reclaim_mode on any Linux \nsystem where it's turned on now. I'm still working through whether it \nalso makes sense in all cases to use the more complicated memory \ninterleaving suggestions that MySQL users have implemented, something \nmost people would need to push into their PostgreSQL server started up \nscripts in /etc/init.d (That will be a fun rpm/deb packaging issue to \ndeal with if this becomes more wide-spread) Suggestions on whether that \nis necessary, or if just disabling zone_reclaim is enough, are welcome \nfrom anyone who wants to try and benchmark it.\n\nNote that this is all tricky to test because some of the bad behavior \nonly happens when the server runs this zone reclaim method, which isn't \na trivial situation to create at will. Servers that have this problem \ntend to have it pop up intermittently, you'll see one incredibly slow \nquery periodically while most are fast. 
All depends on exactly what \ncore is executing, where the memory it needs is at, and whether the \nserver wants to reclaim memory (and just what that means its own \ncomplicated topic) as part of that.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n",
"msg_date": "Tue, 17 Jul 2012 21:52:11 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Linux memory zone reclaim"
},
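A minimal sketch of checking and disabling the setting at runtime, assuming root access and a sysctl-style distribution (the persistent-config file location varies by distro):

    # 1 = zone reclaim enabled, 0 = disabled
    cat /proc/sys/vm/zone_reclaim_mode

    # show the NUMA layout and the inter-node distances the kernel detected
    numactl --hardware

    # turn zone reclaim off immediately...
    sysctl -w vm.zone_reclaim_mode=0

    # ...and keep it off across reboots
    echo 'vm.zone_reclaim_mode = 0' >> /etc/sysctl.conf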
{
"msg_contents": "On Tue, Jul 17, 2012 at 7:52 PM, Greg Smith <[email protected]> wrote:\n> Newer Linux systems with lots of cores have a problem I've been running into\n> a lot more lately I wanted to share initial notes on. My \"newer\" means\n> running the 2.6.32 kernel or later, since I mostly track \"enterprise\" Linux\n> distributions like RHEL6 and Debian Squeeze. The issue is around Linux's\n> zone_reclaim feature. When it pops up, turning that feature off help a lot.\n> Details on what I understand of the problem are below, and as always things\n> may have changed already in even newer kernels.\n\nSNIP\n\n> Scott Marlowe has been griping about this on the mailing lists here for a\n> while now, and it's increasingly trouble for systems I've been seeing lately\n> too. This is a well known problem with MySQL:\n> http://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/\n\nThanks for the link, I'll read up on it. I do have access to large\n(24 to 40 core) NUMA machines so I might try some benchmarking on them\nto see how they work.\n",
"msg_date": "Tue, 17 Jul 2012 20:00:35 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
{
"msg_contents": "On the larger, cellular Itanium systems with multiple motherboards (rx6600\nto Superdome) Oracle has done a lot of tuning with the HP-UX kernel calls\nto optimize for NUMA issues. Will be interesting to see what they bring to\nLinux.\nOn Jul 17, 2012 9:01 PM, \"Scott Marlowe\" <[email protected]> wrote:\n\n> On Tue, Jul 17, 2012 at 7:52 PM, Greg Smith <[email protected]> wrote:\n> > Newer Linux systems with lots of cores have a problem I've been running\n> into\n> > a lot more lately I wanted to share initial notes on. My \"newer\" means\n> > running the 2.6.32 kernel or later, since I mostly track \"enterprise\"\n> Linux\n> > distributions like RHEL6 and Debian Squeeze. The issue is around Linux's\n> > zone_reclaim feature. When it pops up, turning that feature off help a\n> lot.\n> > Details on what I understand of the problem are below, and as always\n> things\n> > may have changed already in even newer kernels.\n>\n> SNIP\n>\n> > Scott Marlowe has been griping about this on the mailing lists here for a\n> > while now, and it's increasingly trouble for systems I've been seeing\n> lately\n> > too. This is a well known problem with MySQL:\n> >\n> http://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/\n>\n> Thanks for the link, I'll read up on it. I do have access to large\n> (24 to 40 core) NUMA machines so I might try some benchmarking on them\n> to see how they work.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn the larger, cellular Itanium systems with multiple motherboards (rx6600 to Superdome) Oracle has done a lot of tuning with the HP-UX kernel calls to optimize for NUMA issues. Will be interesting to see what they bring to Linux.\nOn Jul 17, 2012 9:01 PM, \"Scott Marlowe\" <[email protected]> wrote:\nOn Tue, Jul 17, 2012 at 7:52 PM, Greg Smith <[email protected]> wrote:\n> Newer Linux systems with lots of cores have a problem I've been running into\n> a lot more lately I wanted to share initial notes on. My \"newer\" means\n> running the 2.6.32 kernel or later, since I mostly track \"enterprise\" Linux\n> distributions like RHEL6 and Debian Squeeze. The issue is around Linux's\n> zone_reclaim feature. When it pops up, turning that feature off help a lot.\n> Details on what I understand of the problem are below, and as always things\n> may have changed already in even newer kernels.\n\nSNIP\n\n> Scott Marlowe has been griping about this on the mailing lists here for a\n> while now, and it's increasingly trouble for systems I've been seeing lately\n> too. This is a well known problem with MySQL:\n> http://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/\n\nThanks for the link, I'll read up on it. I do have access to large\n(24 to 40 core) NUMA machines so I might try some benchmarking on them\nto see how they work.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 17 Jul 2012 22:51:35 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
{
"msg_contents": "On Tue, Jul 17, 2012 at 11:00 PM, Scott Marlowe <[email protected]> wrote:\n>\n> Thanks for the link, I'll read up on it. I do have access to large\n> (24 to 40 core) NUMA machines so I might try some benchmarking on them\n> to see how they work.\n\nIt must have been said already, but I'll repeat it just in case:\n\nI think postgres has an easy solution. Spawn the postmaster with\n\"interleave\", to allocate shared memory, and then switch to \"local\" on\nthe backends.\n",
"msg_date": "Wed, 18 Jul 2012 02:38:32 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
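A sketch of the first half of that idea (starting the cluster with interleaved allocation so shared memory is spread across nodes), assuming numactl is installed; the binary and data-directory paths are examples that would need adjusting, and switching each backend back to a local policy after fork is what the patch discussed further down in this thread attempts:

    # allocate postmaster (and thus shared_buffers) memory round-robin
    # across all NUMA nodes instead of filling one node first
    numactl --interleave=all \
        /usr/lib/postgresql/9.1/bin/pg_ctl \
        -D /var/lib/postgresql/9.1/main -l /var/log/postgresql/pg.log start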
{
"msg_contents": "On Tue, Jul 18, 2012 at 2:38 AM, Claudio Freire wrote:\n >It must have been said already, but I'll repeat it just in case:\n\n >I think postgres has an easy solution. Spawn the postmaster with\n >\"interleave\", to allocate shared memory, and then switch to \"local\" on\n >the backends.\n\nDo you have a suggestion about how to do that? I'm running Ubuntu 12.04 \nand PG 9.1, I've modified pg_ctlcluster to cause pg_ctl to use a wrapper \nscript which starts the postmaster using a numactl wrapper, but all \nsubsequent client processes are started with interleaving enabled as \nwell. Any ideas how to make just the postmaster process start with \ninterleaving?\n\nThanks\n\n",
"msg_date": "Tue, 24 Jul 2012 19:36:25 +0100",
"msg_from": "John Lister <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
{
"msg_contents": "On Tue, Jul 24, 2012 at 3:36 PM, John Lister <[email protected]> wrote:\n> Do you have a suggestion about how to do that? I'm running Ubuntu 12.04 and\n> PG 9.1, I've modified pg_ctlcluster to cause pg_ctl to use a wrapper script\n> which starts the postmaster using a numactl wrapper, but all subsequent\n> client processes are started with interleaving enabled as well. Any ideas\n> how to make just the postmaster process start with interleaving?\n\npostmaster should call numactl right after forking:\nhttp://linux.die.net/man/2/set_mempolicy\n",
"msg_date": "Tue, 24 Jul 2012 15:41:46 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
{
"msg_contents": "On Tue, Jul 24, 2012 at 3:41 PM, Claudio Freire <[email protected]> wrote:\n> On Tue, Jul 24, 2012 at 3:36 PM, John Lister <[email protected]> wrote:\n>> Do you have a suggestion about how to do that? I'm running Ubuntu 12.04 and\n>> PG 9.1, I've modified pg_ctlcluster to cause pg_ctl to use a wrapper script\n>> which starts the postmaster using a numactl wrapper, but all subsequent\n>> client processes are started with interleaving enabled as well. Any ideas\n>> how to make just the postmaster process start with interleaving?\n>\n> postmaster should call numactl right after forking:\n> http://linux.die.net/man/2/set_mempolicy\n\nSomething like the attached patch (untested)",
"msg_date": "Tue, 24 Jul 2012 17:12:20 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
{
"msg_contents": "On Tue, Jul 24, 2012 at 5:12 PM, Claudio Freire <[email protected]> wrote:\n> Something like the attached patch (untested)\n\nSorry, on that patch, MPOL_INTERLEAVE should be MPOL_DEFAULT\n",
"msg_date": "Tue, 24 Jul 2012 18:00:01 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
{
"msg_contents": "On 24/07/2012 21:12, Claudio Freire wrote:\n> On Tue, Jul 24, 2012 at 3:41 PM, Claudio Freire <[email protected]> wrote:\n>> On Tue, Jul 24, 2012 at 3:36 PM, John Lister <[email protected]> wrote:\n>>> Do you have a suggestion about how to do that? I'm running Ubuntu 12.04 and\n>>> PG 9.1, I've modified pg_ctlcluster to cause pg_ctl to use a wrapper script\n>>> which starts the postmaster using a numactl wrapper, but all subsequent\n>>> client processes are started with interleaving enabled as well. Any ideas\n>>> how to make just the postmaster process start with interleaving?\n>> postmaster should call numactl right after forking:\n>> http://linux.die.net/man/2/set_mempolicy\n> Something like the attached patch (untested)\nCheers, I'll give it a go, I wonder if this is likely to be integrated \ninto the main code? As has been mentioned here before, postgresql isn't \nas badly affected as mysql for example, but I'm wondering if the trend \nto larger memory and more cores/nodes means it should be offered as an \noption? Although saying that I've read that 10Gb of shared buffers may \nbe enough even in big machines 128+Gb ram..\n\nThoughts?\n\nJohn\n\n\n",
"msg_date": "Tue, 24 Jul 2012 22:23:05 +0100",
"msg_from": "John Lister <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
{
"msg_contents": "On Tue, Jul 24, 2012 at 6:23 PM, John Lister <[email protected]> wrote:\n> Cheers, I'll give it a go, I wonder if this is likely to be integrated into\n> the main code? As has been mentioned here before, postgresql isn't as badly\n> affected as mysql for example, but I'm wondering if the trend to larger\n> memory and more cores/nodes means it should be offered as an option?\n> Although saying that I've read that 10Gb of shared buffers may be enough\n> even in big machines 128+Gb ram..\n\nRemember to change MPOL_INTERLEAVED to MPOL_DEFAULT ;-)\n\nI'm trying to test it myself\n",
"msg_date": "Tue, 24 Jul 2012 18:37:44 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
{
"msg_contents": "My experience is that disabling swap and turning off zone_reclaim_mode\ngets rid of any real problem for a large memory postgresql database\nserver. While it would be great to have a NUMA aware pgsql, I\nquestion the solidity and reliability of the current linux kernel\nimplementation in a NUMA evironment, especially given the poor\nbehaviour of the linux kernel as regards swap behaviour.\n\nOn Tue, Jul 24, 2012 at 3:23 PM, John Lister <[email protected]> wrote:\n> On 24/07/2012 21:12, Claudio Freire wrote:\n>>\n>> On Tue, Jul 24, 2012 at 3:41 PM, Claudio Freire <[email protected]>\n>> wrote:\n>>>\n>>> On Tue, Jul 24, 2012 at 3:36 PM, John Lister <[email protected]>\n>>> wrote:\n>>>>\n>>>> Do you have a suggestion about how to do that? I'm running Ubuntu 12.04\n>>>> and\n>>>> PG 9.1, I've modified pg_ctlcluster to cause pg_ctl to use a wrapper\n>>>> script\n>>>> which starts the postmaster using a numactl wrapper, but all subsequent\n>>>> client processes are started with interleaving enabled as well. Any\n>>>> ideas\n>>>> how to make just the postmaster process start with interleaving?\n>>>\n>>> postmaster should call numactl right after forking:\n>>> http://linux.die.net/man/2/set_mempolicy\n>>\n>> Something like the attached patch (untested)\n>\n> Cheers, I'll give it a go, I wonder if this is likely to be integrated into\n> the main code? As has been mentioned here before, postgresql isn't as badly\n> affected as mysql for example, but I'm wondering if the trend to larger\n> memory and more cores/nodes means it should be offered as an option?\n> Although saying that I've read that 10Gb of shared buffers may be enough\n> even in big machines 128+Gb ram..\n>\n> Thoughts?\n>\n> John\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Tue, 24 Jul 2012 15:57:21 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
{
"msg_contents": "On Tue, Jul 24, 2012 at 6:23 PM, John Lister <[email protected]> wrote:\n> Cheers, I'll give it a go, I wonder if this is likely to be integrated into\n> the main code? As has been mentioned here before, postgresql isn't as badly\n> affected as mysql for example, but I'm wondering if the trend to larger\n> memory and more cores/nodes means it should be offered as an option?\n> Although saying that I've read that 10Gb of shared buffers may be enough\n> even in big machines 128+Gb ram..\n>\n> Thoughts?\n\nThe attached (better) patch builds and doesn't crash at least.\nWhich is always good.\n\nConfigure with --with-numa",
"msg_date": "Tue, 24 Jul 2012 19:05:56 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
{
"msg_contents": "On Tue, Jul 17, 2012 at 09:52:11PM -0400, Greg Smith wrote:\n> I've taken to disabling /proc/sys/vm/zone_reclaim_mode on any Linux\n> system where it's turned on now. I'm still working through whether\n --------------------------------\n> it also makes sense in all cases to use the more complicated memory\n> interleaving suggestions that MySQL users have implemented,\n> something most people would need to push into their PostgreSQL\n> server started up scripts in /etc/init.d (That will be a fun\n> rpm/deb packaging issue to deal with if this becomes more\n> wide-spread) Suggestions on whether that is necessary, or if just\n> disabling zone_reclaim is enough, are welcome from anyone who wants\n> to try and benchmark it.\n\nShould I be turning it off on my server too? It is enabled on my\nsystem.\n\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 26 Jul 2012 20:25:54 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
{
"msg_contents": "Greg Smith <[email protected]> wrote:\n \n> You can tell if this is turned on like this:\n> \n> echo /proc/sys/vm/zone_reclaim_mode\n \nAs a data point, the benchmarks I did for some of the 9.2\nscalability features does not appear to have this turned on:\n \n# cat /proc/sys/vm/zone_reclaim_mode\n0\n \nOur Linux version:\n \nLinux version 2.6.32.46-0.3-default (geeko@buildhost) (gcc version\n4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP\n2011-09-29 17:49:31 +0200\n \nThis has 32 cores (64 \"threads\" with HT) on 4 Xeon X7560 CPUs. \n \nIntel(R) Xeon(R) CPU X7560 @ 2.27GHz\n \nIt has 256GB RAM on 4GB DIMMs, with each core controlling 2 DIMMs\nand each core able to directly talk to every other core. So, it\nis non-uniform, but with this arrangement it is more a matter that\nthere is an 8GB set of memory that is \"fast\" for each core and the\nother 97% of RAM is all accessible at the same speed. There were\nsome other options for what to install on this system or how to\ninstall it which wouldn't have kept things this tight.\n \n> Install the numactl utility and you can see why it's made that\n> decision:\n \nWe get this:\n \n# numactl --hardware\navailable: 4 nodes (0-3)\nnode 0 cpus: 0 1 2 3 4 5 6 7 32 33 34 35 36 37 38 39\nnode 0 size: 65519 MB\nnode 0 free: 283 MB\nnode 1 cpus: 8 9 10 11 12 13 14 15 40 41 42 43 44 45 46 47\nnode 1 size: 65536 MB\nnode 1 free: 25 MB\nnode 2 cpus: 16 17 18 19 20 21 22 23 48 49 50 51 52 53 54 55\nnode 2 size: 65536 MB\nnode 2 free: 26 MB\nnode 3 cpus: 24 25 26 27 28 29 30 31 56 57 58 59 60 61 62 63\nnode 3 size: 65536 MB\nnode 3 free: 25 MB\nnode distances:\nnode 0 1 2 3 \n 0: 10 11 11 11 \n 1: 11 10 11 11 \n 2: 11 11 10 11 \n 3: 11 11 11 10 \n \nWhen considering a hardware purchase, it might be wise to pay close\nattention to how \"far\" a core may need to go to get to the most\n\"distant\" RAM.\n \n-Kevin\n",
"msg_date": "Mon, 30 Jul 2012 11:43:36 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
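Two quick ways to see how a running box is behaving across nodes, assuming the numactl tooling is installed (numastat usually ships alongside it, though packaging differs by distro):

    # just the inter-node distance matrix
    numactl --hardware | sed -n '/node distances:/,$p'

    # per-node allocation counters; numa_miss / numa_foreign growing
    # steadily means allocations keep crossing node boundaries
    numastat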
{
"msg_contents": "On Mon, Jul 30, 2012 at 10:43 AM, Kevin Grittner\n<[email protected]> wrote:\n> node distances:\n> node 0 1 2 3\n> 0: 10 11 11 11\n> 1: 11 10 11 11\n> 2: 11 11 10 11\n> 3: 11 11 11 10\n>\n> When considering a hardware purchase, it might be wise to pay close\n> attention to how \"far\" a core may need to go to get to the most\n> \"distant\" RAM.\n\nI think the zone_reclaim gets turned on with a high ratio. If the\ninter node costs were the same, and the intranode costs dropped in\nhalf, zone reclaim would likely get turned on at boot time.\n\nI had something similar in a 48 core system but if I recall correctly\nthe matrix was 8x8 and the cost differential was much higher.\n\nThe symptoms I saw was that a very hard working db, on a 128G machine\nwith about 95G as OS / kernel cache, would slow to a crawl with kswapd\nworking very hard (I think it was kswapd) after a period of 1 to 3\nweeks. Note that actual swap in and out wasn't all that great by\nvmstat. The same performance hit happened on a similar machine used\nas a file server after a similar period of warm up.\n\nThe real danger here is that the misbehavior can take a long time to\nshow up, and from what I read at the time, the performance gain for\nany zone reclaim = 1 was minimal for a file or db server, and more in\nline for a large virtual machine farm, with a lot of processes chopped\ninto sections small enough to fit in one node's memory and not need a\nlot of access from another node. Anything that relies on the OS to\ncache is likely not served by zone reclaim = 1.\n",
"msg_date": "Mon, 30 Jul 2012 11:09:35 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
{
"msg_contents": "> node distances:\n> node 0 1 2 3 \n> 0: 10 11 11 11 \n> 1: 11 10 11 11 \n> 2: 11 11 10 11 \n> 3: 11 11 11 10 \n> \n> When considering a hardware purchase, it might be wise to pay close\n> attention to how \"far\" a core may need to go to get to the most\n> \"distant\" RAM.\n\nYikes, my server is certainly asymmetric:\n\n\tnode distances:\n\tnode 0 1\n\t 0: 10 21\n\t 1: 21 10\n\nand my Debian Squeeze certainly knows that:\n\t\n\t$ cat < /proc/sys/vm/zone_reclaim_mode\n\t1\n\nServer specs:\n\n\thttp://momjian.us/main/blogs/pgblog/2012.html#January_20_2012\n\nI have 12 2GB DDR3 DIMs.\n\nOf course, my home server is ridiculously idle too. :-)\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Mon, 30 Jul 2012 13:26:58 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
{
"msg_contents": "On 7/30/12 10:09 AM, Scott Marlowe wrote:\n> I think the zone_reclaim gets turned on with a high ratio. If the\n> inter node costs were the same, and the intranode costs dropped in\n> half, zone reclaim would likely get turned on at boot time.\n\nWe've been seeing a major problem with zone_reclaim and Linux, in that\nLinux won't use the FS cache on the \"distant\" RAM *at all* if it thinks\nthat RAM is distant enough. Thus, you get instances of seeing only half\nof RAM used for FS cache, even though the database is 5X larger than RAM.\n\nThis is poor design on Linux's part, since even the distant RAM is\nfaster than disk. For now, we've been disabling zone_reclaim entirely.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Fri, 03 Aug 2012 15:30:22 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
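A way to look for that symptom directly, assuming a kernel that exposes per-node memory statistics under /sys (the exact field names can vary a little between kernel versions):

    # overall page cache size for the whole box
    free -m

    # how much of each node's memory currently holds file pages; one node
    # nearly full and the other nearly empty is the pattern described above
    grep -H FilePages /sys/devices/system/node/node*/meminfo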
{
"msg_contents": "On Fri, Aug 3, 2012 at 4:30 PM, Josh Berkus <[email protected]> wrote:\n> On 7/30/12 10:09 AM, Scott Marlowe wrote:\n>> I think the zone_reclaim gets turned on with a high ratio. If the\n>> inter node costs were the same, and the intranode costs dropped in\n>> half, zone reclaim would likely get turned on at boot time.\n>\n> We've been seeing a major problem with zone_reclaim and Linux, in that\n> Linux won't use the FS cache on the \"distant\" RAM *at all* if it thinks\n> that RAM is distant enough. Thus, you get instances of seeing only half\n> of RAM used for FS cache, even though the database is 5X larger than RAM.\n>\n> This is poor design on Linux's part, since even the distant RAM is\n> faster than disk. For now, we've been disabling zone_reclaim entirely.\n\nI haven't run into this, but we were running ubuntu 10.04 LTS. What\nkernel were you running when this happened? I'd love to see a test\ncase on this, as it seems like a major regression if it's on newer\nkernels, and we're looking at running 12.04 LTS soon on one of our\nbigger machines.\n",
"msg_date": "Fri, 3 Aug 2012 17:02:38 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
},
{
"msg_contents": "\n>> This is poor design on Linux's part, since even the distant RAM is\n>> faster than disk. For now, we've been disabling zone_reclaim entirely.\n> \n> I haven't run into this, but we were running ubuntu 10.04 LTS. What\n> kernel were you running when this happened? I'd love to see a test\n> case on this, as it seems like a major regression if it's on newer\n> kernels, and we're looking at running 12.04 LTS soon on one of our\n> bigger machines.\n\nJeff Frost will have a blog up about it later; we're still collecting data.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Fri, 03 Aug 2012 16:15:33 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux memory zone reclaim"
}
] |
[
{
"msg_contents": "Hi, I was wondering if there are any recommended ways or tools for \ncalculating the planner cost constants? Also, do the absolute values \nmatter or is it simply the ratio between them? I'm about to configure a \nnew server and can probably do a rough job of calculating them based on \nsupposed speeds of the various components but wondered if anyone uses \nmore accurate methods?\n\nThanks\n\nJohn\n",
"msg_date": "Wed, 18 Jul 2012 13:18:25 +0100",
"msg_from": "John Lister <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql query cost values/estimates"
}
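Core PostgreSQL does not ship a calibration tool for these, so a common approach is to look at the current values and then experiment per-session against representative queries; a sketch, with the database name and the query as placeholders:

    # current cost constants; they are expressed relative to seq_page_cost,
    # so the ratios between them matter more than the absolute numbers
    psql -d mydb -c "SELECT name, setting
                     FROM pg_settings
                     WHERE name IN ('seq_page_cost', 'random_page_cost',
                                    'cpu_tuple_cost', 'cpu_index_tuple_cost',
                                    'cpu_operator_cost')"

    # try an override for this session only and compare the plans it produces
    psql -d mydb <<'SQL'
    SET random_page_cost = 2.0;
    EXPLAIN ANALYZE SELECT 1;   -- placeholder: use a representative query here
    SQL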
] |
[
{
"msg_contents": "PostgreSQL: 9.1\nOS: Red Hat 6\nThis PostgreSQL instance is used for dynamic web content. It runs on a dedicated server.\n\nSo I need some PostgreSQL monitoring advice. There are two basic strategies that I am aware of for configuring PostgreSQL:\n\n\n1) In Memory: With an in memory option you give PostgreSQL 70% or more of the memory by setting the shared buffers. You are relying on PostgreSQL to put into memory the information within the database. The only access to the disk from my understanding should be for the initial read of data into a block of memory and when updates are made to data blocks. The advantage of this strategy is that if you notice an increase in the Linux swap file then you know you need to increase the memory on the server as well as PostgreSQL.\n\n2) Disk Caching: With this approach you are relying on the operating system to cache disk files in memory. PostgreSQL will scan the disk cache for the data it needs. In order to use this strategy you set the amount of shared buffers to a low number like 1G or less. You also want to make sure to set effective cache size to the amount of memory that you expect your server's OS to use for disk caching. The only major drawback for me with this strategy is \"how do I know when I need more memory for the OS to use when caching my files?\"\n\nIf I were to use option #2 above what type of monitoring would you suggest I use to tell me when I need to add more memory?\nThanks,\n\nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382\n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL: 9.1\nOS: Red Hat 6\nThis PostgreSQL instance is used for dynamic web content. It runs on a dedicated server. \n\n \nSo I need some PostgreSQL monitoring advice. There are two basic strategies that I am aware of for configuring PostgreSQL:\n \n1) \nIn Memory: With an in memory option you give PostgreSQL 70% or more of the memory by setting the shared buffers. You are relying on PostgreSQL to put into memory the information within the database. The only access to the disk from\n my understanding should be for the initial read of data into a block of memory and when updates are made to data blocks. The advantage of this strategy is that if you notice an increase in the Linux swap file then you know you need to increase the memory\n on the server as well as PostgreSQL. \n2) \nDisk Caching: With this approach you are relying on the operating system to cache disk files in memory. PostgreSQL will scan the disk cache for the data it needs. In order to use this strategy you set the amount of shared buffers\n to a low number like 1G or less. You also want to make sure to set effective cache size to the amount of memory that you expect your server’s OS to use for disk caching. The only major drawback for me with this strategy is “how do I know when I need more\n memory for the OS to use when caching my files?”\n \nIf I were to use option #2 above what type of monitoring would you suggest I use to tell me when I need to add more memory?\n\nThanks,\n \nLance Campbell\nSoftware Architect\nWeb Services at Public Affairs\n217-333-0382",
"msg_date": "Wed, 18 Jul 2012 14:27:16 +0000",
"msg_from": "\"Campbell, Lance\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "monitoring suggestions"
}
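Two signals that are cheap to collect for the second strategy, assuming psql access (the database name is a placeholder): whether the box is keeping its memory in the page cache rather than dipping into swap, and whether block requests are still being answered from cache:

    # OS side: page cache size, and whether swap is being touched
    free -m
    vmstat 5 3

    # PostgreSQL side: blocks found in shared_buffers vs. requested from
    # the kernel; a hit ratio that keeps falling over time suggests the
    # working set no longer fits in memory
    psql -d mydb -c "SELECT datname, blks_hit, blks_read,
                            round(100.0 * blks_hit
                                  / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
                     FROM pg_stat_database
                     ORDER BY blks_read DESC"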
] |
[
{
"msg_contents": "Hi,\n\nI'm surprised at the difference in speed/execution plan between two logically equivalent queries, one using IN, the other using EXISTS. (At least I think they are logically equivalent)\n\nI've created a small setup that illustrates what I mean.\n\nConsider the following tables:\n\nCREATE TABLE foo\n(\n id integer NOT NULL,\n CONSTRAINT foo_pkey PRIMARY KEY (id )\n)\n\nCREATE TABLE bar\n(\n foo_ref integer,\n value character varying,\n id integer NOT NULL,\n CONSTRAINT bar_pkey PRIMARY KEY (id ),\n CONSTRAINT bar_foo_ref_fkey FOREIGN KEY (foo_ref)\n REFERENCES foo (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\n\nThe following two queries have very different query plans:\n\nSELECT *\nFROM foo\nWHERE 'text6' IN (SELECT value\n FROM bar\n JOIN foo AS foo2\n ON bar.foo_ref = foo2.id\n WHERE foo2.id = foo.id)\n\nand\n\nSELECT *\nFROM foo\nWHERE EXISTS(SELECT 0\n FROM bar\n JOIN foo AS foo2\n ON bar.foo_ref = foo2.id\n WHERE foo2.id = foo.id\n AND bar.value = 'text6')\n\nWhereas the second one uses the indexes to look up the matching bar rows, the first one performs a full table scan on bar.\nGiven that both queries are logically equivalent, I'm wondering why this optimization isn't made. Is there information missing for the optimizer to make the better decision? Are these queries not equivalent perhaps?\n\nAn EXPLAIN ANALYZE on the two queries on filled and analyzed tables highlighting the difference:\n\nEXPLAIN ANALYZE SELECT * FROM foo WHERE 'text6' IN (SELECT value FROM bar JOIN foo AS foo2 ON bar.foo_ref = foo2.id WHERE foo2.id = foo.id)\n\n\"Seq Scan on foo (cost=0.00..3316934.60 rows=5000 width=4) (actual time=6.416..10803.056 rows=1 loops=1)\"\n\" Filter: (SubPlan 1)\"\n\" SubPlan 1\"\n\" -> Nested Loop (cost=0.00..663.29 rows=1 width=8) (actual time=0.667..1.079 rows=1 loops=10000)\"\n\" -> Seq Scan on bar (cost=0.00..655.00 rows=1 width=12) (actual time=0.660..1.072 rows=1 loops=10000)\"\n\" Filter: (foo_ref = foo.id)\"\n\" -> Index Scan using foo_pkey on foo foo2 (cost=0.00..8.28 rows=1 width=4) (actual time=0.002..0.003 rows=1 loops=10000)\"\n\" Index Cond: (id = foo.id)\"\n\"Total runtime: 10803.088 ms\"\n\n\nEXPLAIN ANALYZE SELECT * FROM foo WHERE EXISTS(SELECT 0 FROM bar JOIN foo AS foo2 ON bar.foo_ref = foo2.id WHERE foo2.id = foo.id AND bar.value = 'text6')\n\n\"Nested Loop (cost=16.58..24.88 rows=1 width=4) (actual time=0.032..0.032 rows=1 loops=1)\"\n\" -> HashAggregate (cost=16.58..16.59 rows=1 width=8) (actual time=0.029..0.029 rows=1 loops=1)\"\n\" -> Nested Loop (cost=0.00..16.58 rows=1 width=8) (actual time=0.025..0.025 rows=1 loops=1)\"\n\" -> Index Scan using bar_value_idx on bar (cost=0.00..8.29 rows=1 width=4) (actual time=0.019..0.020 rows=1 loops=1)\"\n\" Index Cond: ((value)::text = 'text6'::text)\"\n\" -> Index Scan using foo_pkey on foo foo2 (cost=0.00..8.28 rows=1 width=4) (actual time=0.002..0.003 rows=1 loops=1)\"\n\" Index Cond: (id = bar.foo_ref)\"\n\" -> Index Scan using foo_pkey on foo (cost=0.00..8.28 rows=1 width=4) (actual time=0.001..0.002 rows=1 loops=1)\"\n\" Index Cond: (id = bar.foo_ref)\"\n\"Total runtime: 0.064 ms\"\n\nHoping someone sheds some light on this and restores my confidence in the optimizer,\n\nNick Hofstede\n\nPS: I know the EXIST can also be rewritten to a JOIN\n\nSELECT foo.id\nFROM foo\n JOIN bar\n ON bar.foo_ref = foo.id\n JOIN foo AS foo2\n ON bar.foo_ref = foo2.id\nWHERE foo2.id = foo.id\n AND bar.value = 'text6'\n\nand ultimately to (thanks to foo2.id = foo.id)\n\nSELECT foo.id\nFROM 
foo\n JOIN bar\n ON bar.foo_ref = foo.id\nWHERE bar.value = 'text6'\n\n.. all of wich have an execution plan and performance similar to the EXISTS query.\nWhat I'm concerned about is that the first step from IN to EXISTS isn't made (which also precludes all following optimization steps)\n\n\n________________________________\n\nInventive Designers' Email Disclaimer:\nhttp://www.inventivedesigners.com/email-disclaimer\n",
"msg_date": "Wed, 18 Jul 2012 16:10:30 +0000",
"msg_from": "Nick Hofstede <[email protected]>",
"msg_from_op": true,
"msg_subject": "optimizing queries using IN and EXISTS"
},
{
"msg_contents": "On 18 July 2012 17:10, Nick Hofstede <[email protected]> wrote:\n> Hi,\n>\n> I'm surprised at the difference in speed/execution plan between two logically equivalent queries, one using IN, the other using EXISTS. (At least I think they are logically equivalent)\n\nThey are not logically equivalent.\n\nhttp://www.postgresql.org/docs/current/static/functions-subquery.html\n\nSee the notes about NULL under IN.\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n",
"msg_date": "Wed, 18 Jul 2012 19:40:28 +0100",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing queries using IN and EXISTS"
},
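A small, self-contained illustration of the semantic difference Peter is pointing at, runnable in psql against any database:

    psql <<'SQL'
    -- x IN (subquery) yields NULL (unknown), not false, when the subquery
    -- returns a NULL and no match; EXISTS simply yields false
    SELECT 1 IN (VALUES (NULL::int))                                      AS in_result,
           EXISTS (SELECT 1 FROM (VALUES (NULL::int)) v(x) WHERE v.x = 1) AS exists_result;

    -- the usual trap is the negated form: NOT IN returns no row here,
    -- while NOT EXISTS does return one
    SELECT 'not in' AS form
    WHERE 2 NOT IN (VALUES (1), (NULL::int))
    UNION ALL
    SELECT 'not exists'
    WHERE NOT EXISTS (SELECT 1
                      FROM (VALUES (1), (NULL::int)) v(x)
                      WHERE v.x = 2);
    SQL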
{
"msg_contents": "I realize there is a case where IN returns NULL and EXISTS returns FALSE (in case there is a matching bar with bar.value set to NULL)\nIn that case however, this would result in the foo row not being included in the resultset, which is the same outcome in both cases.\nNOT IN vs NOT EXISTS is another story, I agree.\nSo even though the subqueries aren't logically equivalent, I still believe the queries in their totality are.\n\nIs this reasoning (knowing the NULL will be treated as FALSE by the WHERE) a bridge too far for the optimizer?\nI retested with bar.value declared as NOT NULL but that doesn't seem to help.\n\nEven though this is a bit disappointing, I think it gave me a feel of what the optimizer knows about and takes into consideration and what not ...\n\nWith kind regards,\n\nNick\n________________________________________\nVan: Peter Geoghegan [[email protected]]\nVerzonden: woensdag 18 juli 2012 20:40\nAan: Nick Hofstede\nCC: [email protected]\nOnderwerp: Re: [PERFORM] optimizing queries using IN and EXISTS\n\nOn 18 July 2012 17:10, Nick Hofstede <[email protected]> wrote:\n> Hi,\n>\n> I'm surprised at the difference in speed/execution plan between two logically equivalent queries, one using IN, the other using EXISTS. (At least I think they are logically equivalent)\n\nThey are not logically equivalent.\n\nhttp://www.postgresql.org/docs/current/static/functions-subquery.html\n\nSee the notes about NULL under IN.\n\n--\nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n\n--\nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n\n________________________________\n\nInventive Designers' Email Disclaimer:\nhttp://www.inventivedesigners.com/email-disclaimer\n",
"msg_date": "Wed, 18 Jul 2012 20:28:52 +0000",
"msg_from": "Nick Hofstede <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: optimizing queries using IN and EXISTS"
},
{
"msg_contents": "Nick Hofstede <[email protected]> writes:\n> I'm surprised at the difference in speed/execution plan between two logically equivalent queries, one using IN, the other using EXISTS. (At least I think they are logically equivalent)\n\n> SELECT *\n> FROM foo\n> WHERE 'text6' IN (SELECT value\n> FROM bar\n> JOIN foo AS foo2\n> ON bar.foo_ref = foo2.id\n> WHERE foo2.id = foo.id)\n\nHm. convert_ANY_sublink_to_join() rejects subqueries that contain any\nVars of the parent query level, so the reference to foo.id prevents this\nfrom being converted to a semijoin. However, it seems like that's\noverly restrictive. I'm not sure that we could remove the test\naltogether, but at least outer vars used in WHERE seem safe.\n\nIn the meantime, you can recast like this:\n\nSELECT *\nFROM foo\nWHERE ('text6', id) IN (SELECT value, foo2.id\n FROM bar\n JOIN foo AS foo2\n ON bar.foo_ref = foo2.id)\n\nand still get a semijoin plan from an IN-style query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Jul 2012 18:36:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing queries using IN and EXISTS"
},
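To confirm that the recast form gets the semijoin treatment on a particular build and dataset, plain EXPLAIN is enough (the database name is a placeholder, and the exact join strategy chosen will depend on the data):

    psql -d mydb -c "EXPLAIN
        SELECT *
        FROM foo
        WHERE ('text6', id) IN (SELECT value, foo2.id
                                FROM bar
                                JOIN foo AS foo2
                                  ON bar.foo_ref = foo2.id)"
    # look for a semi join node (e.g. 'Hash Semi Join' or
    # 'Nested Loop Semi Join') instead of a 'SubPlan' filter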
{
"msg_contents": "Interesting.\nThanks for the work-around.\n\nRegards,\n\nNick\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Tom Lane\nSent: donderdag 19 juli 2012 0:36\nTo: Nick Hofstede\nCc: [email protected]\nSubject: Re: [PERFORM] optimizing queries using IN and EXISTS\n\nNick Hofstede <[email protected]> writes:\n> I'm surprised at the difference in speed/execution plan between two\n> logically equivalent queries, one using IN, the other using EXISTS.\n> (At least I think they are logically equivalent)\n\n> SELECT *\n> FROM foo\n> WHERE 'text6' IN (SELECT value\n> FROM bar\n> JOIN foo AS foo2\n> ON bar.foo_ref = foo2.id\n> WHERE foo2.id = foo.id)\n\nHm. convert_ANY_sublink_to_join() rejects subqueries that contain any Vars of the parent query level, so the reference to foo.id prevents this from being converted to a semijoin. However, it seems like that's overly restrictive. I'm not sure that we could remove the test altogether, but at least outer vars used in WHERE seem safe.\n\nIn the meantime, you can recast like this:\n\nSELECT *\nFROM foo\nWHERE ('text6', id) IN (SELECT value, foo2.id\n FROM bar\n JOIN foo AS foo2\n ON bar.foo_ref = foo2.id)\n\nand still get a semijoin plan from an IN-style query.\n\n regards, tom lane\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n--\nThis message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean.\n\n\n________________________________\n\nInventive Designers' Email Disclaimer:\nhttp://www.inventivedesigners.com/email-disclaimer\n",
"msg_date": "Thu, 19 Jul 2012 11:53:43 +0000",
"msg_from": "Nick Hofstede <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: optimizing queries using IN and EXISTS"
}
] |
[
{
"msg_contents": "Hi, I am running a 9.1 server at Ubuntu. When I upgraded to the current version \nI did a pg_dump followed by pg_restore and found that the db was much faster. \nBut slowed down again after two days. I did the dump-restore again and could now \ncompare the two (actually identical) databases. This is a query of the old one \ndirectly after a VACUUM ANALYSE:\n\n QUERY PLAN\n--------------------------------------------------------------------------------\n------------------------------------------------------------------------\n WindowAgg (cost=2231.56..2232.17 rows=22 width=59) (actual \ntime=16748.382..16749.203 rows=340 loops=1)\n -> Sort (cost=2231.56..2231.62 rows=22 width=59) (actual \ntime=16748.360..16748.575 rows=340 loops=1)\n Sort Key: ba.bookid, (CASE WHEN (e.languageid = 123) THEN 1 WHEN \n(e.languageid = 401) THEN 2 WHEN (e.languageid = 150) THEN 3 ELSE 4 END)\n Sort Method: quicksort Memory: 60kB\n -> Nested Loop (cost=0.00..2231.07 rows=22 width=59) (actual \ntime=0.125..16747.395 rows=340 loops=1)\n -> Index Scan using authorid1 on book_author ba \n(cost=0.00..73.94 rows=20 width=8) (actual time=0.034..11.453 rows=99 loops=1)\n Index Cond: (authorid = 543)\n -> Index Scan using foreign_key_bookid on editions e \n(cost=0.00..107.76 rows=8 width=51) (actual time=90.741..169.031 rows=3 \nloops=99)\n Index Cond: (bookid = ba.bookid)\n Filter: mainname\n Total runtime: 16752.146 ms\n(11 Zeilen)\n\nAnd here after dump-restore:\n\n QUERY PLAN \n--------------------------------------------------------------------------------\n---------------------------------------------------------------------\n WindowAgg (cost=2325.78..2326.41 rows=23 width=58) (actual time=18.583..19.387 \nrows=340 loops=1)\n -> Sort (cost=2325.78..2325.84 rows=23 width=58) (actual \ntime=18.562..18.823 rows=340 loops=1)\n Sort Key: ba.bookid, (CASE WHEN (e.languageid = 123) THEN 1 WHEN \n(e.languageid = 401) THEN 2 WHEN (e.languageid = 150) THEN 3 ELSE 4 END)\n Sort Method: quicksort Memory: 60kB\n -> Nested Loop (cost=0.00..2325.26 rows=23 width=58) (actual \ntime=0.385..18.060 rows=340 loops=1)\n -> Index Scan using authorid1 on book_author ba \n(cost=0.00..73.29 rows=20 width=8) (actual time=0.045..0.541 rows=99 loops=1)\n Index Cond: (authorid = 543)\n -> Index Scan using foreign_key_bookid on editions e \n(cost=0.00..112.49 rows=9 width=50) (actual time=0.056..0.168 rows=3 loops=99)\n Index Cond: (bookid = ba.bookid)\n Filter: mainname\n Total runtime: 19.787 ms\n(11 Zeilen)\n\nserver settings:\nshared_buffers = 680MB\nwork_mem = 10MB\nmaintenance_work_mem = 64MB\ncheckpoint_segments = 32\ncheckpoint_completion_target = 0.9\neffective_cache_size = 1500MB\n\nNo matter how much I vacuum or analyse the slow db, I don't get it faster.\nI also checked for dead tuples - there are none.\n\n",
"msg_date": "Thu, 19 Jul 2012 11:33:28 +0000 (UTC)",
"msg_from": "Felix Scheicher <[email protected]>",
"msg_from_op": true,
"msg_subject": "queries are fast after dump->restore but slow again after some days\n\tdispite vacuum"
},
{
"msg_contents": "\nOn 07/19/2012 07:33 AM, Felix Scheicher wrote:\n> Hi, I am running a 9.1 server at Ubuntu. When I upgraded to the current version\n> I did a pg_dump followed by pg_restore and found that the db was much faster.\n> But slowed down again after two days. I did the dump-restore again and could now\n> compare the two (actually identical) databases. This is a query of the old one\n> directly after a VACUUM ANALYSE:\n...\n>\n> No matter how much I vacuum or analyse the slow db, I don't get it faster.\n> I also checked for dead tuples - there are none.\n\n\nTry running CLUSTER on the relevant tables and see if it makes a \ndifference. If it does you might want to look into using pg_reorg \nperiodically.\n\ncheers\n\nandrew\n",
"msg_date": "Thu, 19 Jul 2012 08:38:53 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries are fast after dump->restore but slow again\n\tafter some days dispite vacuum"
},
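A sketch of trying that on this case, using the table and index names that appear in the plans above (the database name is a placeholder); note that CLUSTER takes an exclusive lock and rewrites the whole table, so it needs a quiet moment:

    psql -d mydb <<'SQL'
    -- rewrite the tables in the order of the indexes the slow plan was using
    CLUSTER editions USING foreign_key_bookid;
    CLUSTER book_author USING authorid1;
    -- refresh planner statistics afterwards
    ANALYZE editions;
    ANALYZE book_author;
    SQL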
{
"msg_contents": "Andrew Dunstan <andrew <at> dunslane.net> writes:\n\n> Try running CLUSTER on the relevant tables and see if it makes a \n> difference. If it does you might want to look into using pg_reorg \n> periodically.\n\n\nThat worked like a charm! Many thanks. But how comes, the queries are also fast \nafter a restore without the cluster?\n\nregards,\nFelix\n\n",
"msg_date": "Thu, 19 Jul 2012 15:13:14 +0000 (UTC)",
"msg_from": "Felix Scheicher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: queries are fast after dump->restore but slow again after some\n\tdays dispite vacuum"
},
{
"msg_contents": "\nOn 07/19/2012 11:13 AM, Felix Scheicher wrote:\n> Andrew Dunstan <andrew <at> dunslane.net> writes:\n>\n>> Try running CLUSTER on the relevant tables and see if it makes a\n>> difference. If it does you might want to look into using pg_reorg\n>> periodically.\n>\n> That worked like a charm! Many thanks. But how comes, the queries are also fast\n> after a restore without the cluster?\n>\n\n\n\nThere is probably a lot of unused space in your table. CLUSTER rewrites \na fresh copy, as do restore and pg_reorg.\n\nYou might also want to try changing the settings on the table so it gets \nmuch more aggressively auto-vacuumed, so that dead space is made \navailable much more quickly, and the table has less chance to get bloated.\n\ncheers\n\nandrew\n",
"msg_date": "Thu, 19 Jul 2012 11:48:46 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries are fast after dump->restore but slow again\n\tafter some days dispite vacuum"
},
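Per-table storage parameters are the usual way to do that without touching the global autovacuum settings; the thresholds below are only examples that would need tuning against the real write rate (the database name is a placeholder):

    psql -d mydb <<'SQL'
    -- vacuum after roughly 2% of the table has changed instead of the
    -- default 20%, so dead space is reclaimed and reused much sooner
    ALTER TABLE editions SET (autovacuum_vacuum_scale_factor  = 0.02,
                              autovacuum_vacuum_threshold     = 100,
                              autovacuum_analyze_scale_factor = 0.02);
    SQL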
{
"msg_contents": "On Thu, Jul 19, 2012 at 8:13 AM, Felix Scheicher <[email protected]> wrote:\n> Andrew Dunstan <andrew <at> dunslane.net> writes:\n>\n>> Try running CLUSTER on the relevant tables and see if it makes a\n>> difference. If it does you might want to look into using pg_reorg\n>> periodically.\n>\n>\n> That worked like a charm! Many thanks. But how comes, the queries are also fast\n> after a restore without the cluster?\n\nProbably fewer buffers needed to be touched.\n\nRunning \"explain (analyze, buffers)\" would give information on how\nmany buffers were touched.\n\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 19 Jul 2012 09:12:08 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries are fast after dump->restore but slow again\n\tafter some days dispite vacuum"
},
{
"msg_contents": "Are you running a lot of full table updates?\n\nOn Thu, Jul 19, 2012 at 9:13 AM, Felix Scheicher <[email protected]> wrote:\n> Andrew Dunstan <andrew <at> dunslane.net> writes:\n>\n>> Try running CLUSTER on the relevant tables and see if it makes a\n>> difference. If it does you might want to look into using pg_reorg\n>> periodically.\n>\n>\n> That worked like a charm! Many thanks. But how comes, the queries are also fast\n> after a restore without the cluster?\n>\n> regards,\n> Felix\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Thu, 19 Jul 2012 11:04:41 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries are fast after dump->restore but slow again\n\tafter some days dispite vacuum"
},
{
"msg_contents": ">>> Try running CLUSTER on the relevant tables and see if it makes a\n>>> difference. If it does you might want to look into using pg_reorg\n>>> periodically.\n>>\n>>\n>> That worked like a charm! Many thanks. But how comes, the queries are also fast\n>> after a restore without the cluster?\n>>\n2012/7/19 Scott Marlowe <[email protected]>:\n> Are you running a lot of full table updates?\n\nIf you mean updates which are applied on every or almost every row of\nthe table - yes, it happens with two rather small tables of max. 10\n000 rows. But they are both not touched by the query with this big\nperformance difference.\n\nRegards,\nFelix\n",
"msg_date": "Thu, 19 Jul 2012 19:24:59 +0200",
"msg_from": "mandavi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries are fast after dump->restore but slow again\n\tafter some days dispite vacuum"
},
{
"msg_contents": ">> Are you running a lot of full table updates?\n> If you mean updates which are applied on every or almost every row of\n> the table - yes, it happens with two rather small tables of max. 10\n> 000 rows. But they are both not touched by the query with this big\n> performance difference.\nI'm not an expert, but would it help to change fillfactor to about 45%? \nI'm just guessing that full table updates with fillfactor=45% could \nstore the rows on the same page. Maybe I'm wrong.\n",
"msg_date": "Fri, 20 Jul 2012 09:10:23 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries are fast after dump->restore but slow again\n\tafter some days dispite vacuum"
}
] |
[
{
"msg_contents": "Hi all, \n\n\nWe have put some deferred constraints (some initially immediate, some\ninitially deferred) into our database for testing with our applications. \n\nI haven't seen any noticeable loss in performance, but I am not sure I can\nproperly simulate our full production environment load levels in my tests.\nI was wondering if I am turning on something that has known significant\nnegative impacts to performance, just from having them there. \n\nI understand a lot more may have to be tracked through a transaction and\nthere could be some impact from that. Similar to an after update trigger? Or\nare the two not comparable in terms of impact from what is tracked and then\nchecked. \n\n\nAnyways, just looking for feedback if anyone has any. \n\n\n-Mark\n\n",
"msg_date": "Thu, 19 Jul 2012 20:27:38 -0600",
"msg_from": "\"mark\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Deferred constraints performance impact ? "
},
{
"msg_contents": "On Thu, 2012-07-19 at 20:27 -0600, mark wrote:\n> I understand a lot more may have to be tracked through a transaction and\n> there could be some impact from that. Similar to an after update trigger? Or\n> are the two not comparable in terms of impact from what is tracked and then\n> checked. \n\nThey should be very comparable to AFTER triggers. It's actually a little\nbetter because there are optimizations to avoid queuing constraint\nchecks if we know it will pass.\n\nI would recommend testing a few degenerate cases to see how big the\nimpact is, and try to see if that is reasonable for your application.\n\nRegards,\n\tJeff Davis\n\n\n",
"msg_date": "Sun, 12 Aug 2012 15:42:10 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deferred constraints performance impact ?"
},
{
"msg_contents": "On Fri, Jul 20, 2012 at 4:27 AM, mark <[email protected]> wrote:\n> We have put some deferred constraints (some initially immediate, some\n> initially deferred) into our database for testing with our applications.\n\n> I understand a lot more may have to be tracked through a transaction and\n> there could be some impact from that. Similar to an after update trigger? Or\n> are the two not comparable in terms of impact from what is tracked and then\n> checked.\n\nAnother factor might be the amount of constraint violations you\nexpect: if there are many then deferring the check can create much\nmore work for the DB because you issue more DML as with a non deferred\nconstraint which could create errors much sooner and hence make you\nstop sending DML earlier.\n\nKind regards\n\nrobert\n\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n\n",
"msg_date": "Mon, 13 Aug 2012 10:33:24 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deferred constraints performance impact ?"
}
] |
[
{
"msg_contents": "(First, apologies if this post now appears twice - it appears our mail server rewrites my address!)\n\nHello all. I'm a pgsql performance virgin so hope I cross all the 't's\nand dot the lower-case 'j's when posting this query...\n\nOn our production database server we're experiencing behaviour that\nseveral engineers are unable to explain - hence this Email. First, our\nspecs;\n\nScientific Linux 6.2, kernel 2.6.32\nPG version 9.1.3, release 1PGDG.rhel6\n24GB RAM\n8 cores\n2x software SSD-based RAIDs:\n a) ~660GB, RAID 5, 4 SSDs (data)\n b) ~160GB, RAID 1, 2 SSDs (xlogs + tmp tables)\n\nWe're seeing SELECT statements and even EXPLAIN (no ANAYLZE) \nstatements hang indefinitely until *something* (we don't know what)\nreleases some kind of resource or no longer becomes a massive bottle\nneck. These are the symptoms.\n\nHowever, the system seems healthy - no table ('heavyweight') locks are\nheld by any session (this happens with only a few connected sessions),\nall indexes are used correctly, other transactions are writing data (we\ngenerally only have a few sessions running at a time - perhaps 10) etc.\netc. In fact, we are writing (or bgwriter is), 2-3 hundred MB/s\nsometimes.\n\nWe regularly run vacuum analyze at quiet periods - generally 1-2s daily.\n\nThese sessions (that only read data) that are blocked can block from\nanything from between only 5 minutes to 10s of hours then miraculously\ncomplete successfully at once.\n\nAny suggestions for my next avenue of investigation? I'll try and\ncapture more data by observation next time it happens (it is relatively\nintermittent).\n\nRegards,\n\nJim\n\nPS. These are the settings that differ from the default:\n\ncheckpoint_segments = 128\nmaintenance_work_mem = 256MB\nsynchronous_commit = off\nrandom_page_cost = 3.0\nwal_buffers = 16MB\nshared_buffers = 8192MB\ncheckpoint_completion_target = 0.9\neffective_cache_size = 18432MB\nwork_mem = 32MB\neffective_io_concurrency = 12\nmax_stack_depth = 8MB\nlog_autovacuum_min_duration = 0\nlog_lock_waits = on\nautovacuum_vacuum_scale_factor = 0.1\nautovacuum_naptime = 8\nautovacuum_max_workers = 4\n\nPPS. I've just noticed that our memory configuration is over subscribed!\n shared_buffers + effective_cache_size > Total available RAM! Could \n this be the root cause somehow?\n\n-- \nJim Vanns\nSystems Programmer\nFramestore\n\n",
"msg_date": "Mon, 23 Jul 2012 09:41:45 +0100",
"msg_from": "Jim Vanns <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd blocking (or massively latent) issue - even with EXPLAIN"
},
{
"msg_contents": "Hi> > We're seeing SELECT statements and even EXPLAIN (no ANAYLZE) > statements hang indefinitely until *something* (we don't know what)> releases some kind of resource or no longer becomes a massive bottle> neck. These are the symptoms.Is this in pgAdmin? Or psql on the console?> However, the system seems healthy - no table ('heavyweight') locks are> held by any session (this happens with only a few connected sessions),> all indexes are used correctly, other transactions are writing data (we> generally only have a few sessions running at a time - perhaps 10) etc.> etc. In fact, we are writing (or bgwriter is), 2-3 hundred MB/s> sometimes.What is shown in \"top\" and \"iostat\" whilst the queries are running?> > We regularly run vacuum analyze at quiet periods - generally 1-2s daily.> > These sessions (that only read data) that are blocked can block from> anything from between only 5 minutes to 10s of hours then miraculously> complete successfully at once.> Are any \"blockers\" shown in pg_stat_activity?> > checkpoint_segments = 128> maintenance_work_mem = 256MB> synchronous_commit = off> random_page_cost = 3.0> wal_buffers = 16MB> shared_buffers = 8192MB> checkpoint_completion_target = 0.9> effective_cache_size = 18432MB> work_mem = 32MB> effective_io_concurrency = 12> max_stack_depth = 8MB> log_autovacuum_min_duration = 0> log_lock_waits = on> autovacuum_vacuum_scale_factor = 0.1> autovacuum_naptime = 8> autovacuum_max_workers = 4Memory looks reasonably configured to me. effective_cache_size is only an indication to the planner and is not actually allocated. Is anything being written to the logfiles?Cheers=============================================\n\nRomax Technology Limited\nRutherford House\nNottingham Science & Technology Park\nNottingham, \nNG7 2PZ\nEngland\n\nTelephone numbers:\n+44 (0)115 951 88 00 (main)\n\nFor other office locations see:\nhttp://www.romaxtech.com/Contact\n=================================\n===============\nE-mail: [email protected]\nWebsite: www.romaxtech.com\n=================================\n\n================\nConfidentiality Statement\nThis transmission is for the addressee only and contains information that is confidential and privileged.\nUnless you are the named addressee, or authorised to receive it on behalf of the addressee \nyou may not copy or use it, or disclose it to anyone else. \nIf you have received this transmission in error please delete from your system and contact the sender. Thank you for your cooperation.\n=================================================\n\n",
"msg_date": "Mon, 23 Jul 2012 14:46:28 +0100",
"msg_from": "\"Martin French\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd blocking (or massively latent) issue - even with EXPLAIN"
},
{
"msg_contents": "\nOn 07/23/2012 04:41 AM, Jim Vanns wrote:\n> We're seeing SELECT statements and even EXPLAIN (no ANAYLZE)\n> statements hang indefinitely until *something* (we don't know what)\n> releases some kind of resource or no longer becomes a massive bottle\n> neck. These are the symptoms.\n\n\nI have seen this sort of behaviour on systems with massive catalogs \n(millions of tables and indexes). Could that be your problem?\n\n\ncheers\n\nandrew\n",
"msg_date": "Mon, 23 Jul 2012 09:53:25 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd blocking (or massively latent) issue - even with\n EXPLAIN"
},
{
"msg_contents": "Thank you all for your replies, I shall try and qualify and confirm...\n\nOn Mon, 2012-07-23 at 14:46 +0100, Martin French wrote:\n> Hi\n> \n> > \n> > We're seeing SELECT statements and even EXPLAIN (no ANAYLZE) \n> > statements hang indefinitely until *something* (we don't know what)\n> > releases some kind of resource or no longer becomes a massive bottle\n> > neck. These are the symptoms.\n> \n> Is this in pgAdmin? Or psql on the console?\n> \npsql\n\n> > However, the system seems healthy - no table ('heavyweight') locks\n> are\n> > held by any session (this happens with only a few connected\n> sessions),\n> > all indexes are used correctly, other transactions are writing data\n> (we\n> > generally only have a few sessions running at a time - perhaps 10)\n> etc.\n> > etc. In fact, we are writing (or bgwriter is), 2-3 hundred MB/s\n> > sometimes.\n> \n> What is shown in \"top\" and \"iostat\" whilst the queries are running?\n\nGenerally, lots of CPU churn (90-100%) and a fair bit of I/O wait.\niostat reports massive reads (up to 300MB/s).\n\n> > \n> > We regularly run vacuum analyze at quiet periods - generally 1-2s\n> daily.\n\n(this is to answer to someone who didn't reply to the list)\n\nWe run full scans using vacuumdb so don't just rely on autovacuum. The\nsmall table is so small (<50 tuples) a sequence scan is always\nperformed.\n\n> > These sessions (that only read data) that are blocked can block from\n> > anything from between only 5 minutes to 10s of hours then\n> miraculously\n> > complete successfully at once.\n> > \n> \n> Are any \"blockers\" shown in pg_stat_activity?\n\nNone. Ever. Nothing in pg_locks either.\n\n> > \n> > checkpoint_segments = 128\n> > maintenance_work_mem = 256MB\n> > synchronous_commit = off\n> > random_page_cost = 3.0\n> > wal_buffers = 16MB\n> > shared_buffers = 8192MB\n> > checkpoint_completion_target = 0.9\n> > effective_cache_size = 18432MB\n> > work_mem = 32MB\n> > effective_io_concurrency = 12\n> > max_stack_depth = 8MB\n> > log_autovacuum_min_duration = 0\n> > log_lock_waits = on\n> > autovacuum_vacuum_scale_factor = 0.1\n> > autovacuum_naptime = 8\n> > autovacuum_max_workers = 4\n> \n> Memory looks reasonably configured to me. effective_cache_size is only\n> an indication to the planner and is not actually allocated. \n\nI realise that.\n\n> Is anything being written to the logfiles?\n\nNothing obvious - and we log a fair amount. No tmp table creations,\nno locks held. \n\nTo add to this EXPLAIN reports it took only 0.23ms to run (for example)\nwhereas the wall clock time is more like 20-30 minutes (or up to n hours\nas I said where everything appears to click back into place at the same\ntime).\n\nThanks.\n\nJim\n\n> Cheers============================================= Romax Technology\n> Limited Rutherford House Nottingham Science & Technology Park\n> Nottingham, NG7 2PZ England Telephone numbers: +44 (0)115 951 88 00\n> (main) For other office locations see:\n> http://www.romaxtech.com/Contact =================================\n> =============== E-mail: [email protected] Website: www.romaxtech.com\n> ================================= ================ Confidentiality\n> Statement This transmission is for the addressee only and contains\n> information that is confidential and privileged. 
Unless you are the\n> named addressee, or authorised to receive it on behalf of the\n> addressee you may not copy or use it, or disclose it to anyone else.\n> If you have received this transmission in error please delete from\n> your system and contact the sender. Thank you for your cooperation.\n> =================================================\n> \n\n-- \nJim Vanns\nSystems Programmer\nFramestore\n\n",
"msg_date": "Mon, 23 Jul 2012 15:46:03 +0100",
"msg_from": "Jim Vanns <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd blocking (or massively latent) issue - even with\n EXPLAIN"
},
{
"msg_contents": "On Mon, 2012-07-23 at 09:53 -0400, Andrew Dunstan wrote:\n> On 07/23/2012 04:41 AM, Jim Vanns wrote:\n> > We're seeing SELECT statements and even EXPLAIN (no ANAYLZE)\n> > statements hang indefinitely until *something* (we don't know what)\n> > releases some kind of resource or no longer becomes a massive bottle\n> > neck. These are the symptoms.\n> \n> I have seen this sort of behaviour on systems with massive catalogs \n> (millions of tables and indexes). Could that be your problem?\n\nPossibly. I'm not familiar with the catalogs. I'll look into that.\n\nThanks,\n\nJim\n\n> \n> cheers\n> \n> andrew\n> \n\n-- \nJim Vanns\nSystems Programmer\nFramestore\n\n",
"msg_date": "Mon, 23 Jul 2012 15:47:22 +0100",
"msg_from": "Jim Vanns <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd blocking (or massively latent) issue - even with\n EXPLAIN"
},
{
"msg_contents": "Jim Vanns <[email protected]> writes:\n> We're seeing SELECT statements and even EXPLAIN (no ANAYLZE) \n> statements hang indefinitely until *something* (we don't know what)\n> releases some kind of resource or no longer becomes a massive bottle\n> neck. These are the symptoms.\n\nDoes anything show up as blocked in the pg_locks view?\n\nCould you attach to the stuck process with gdb and get a stack trace?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Jul 2012 11:09:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd blocking (or massively latent) issue - even with EXPLAIN"
},
{
"msg_contents": "On Mon, 2012-07-23 at 11:09 -0400, Tom Lane wrote:\n> Jim Vanns <[email protected]> writes:\n> > We're seeing SELECT statements and even EXPLAIN (no ANAYLZE) \n> > statements hang indefinitely until *something* (we don't know what)\n> > releases some kind of resource or no longer becomes a massive bottle\n> > neck. These are the symptoms.\n> \n> Does anything show up as blocked in the pg_locks view?\n\nNope.\n\n> Could you attach to the stuck process with gdb and get a stack trace?\n\nHaven't been quite brave enough to do that yet - this is a production\nserver. I did manage to strace a process though - it (the server side\nprocess of a psql EXPLAIN) appeared to spin on an awful lot of semop()\ncalls with the occasional read(). Of course, in the context of a shared\nmemory system such as postgres I'd expect to see quite a lot of semop()\ncalls but I've no idea how much is normal and how much is excessive.\n\nJim\n\n> \t\t\tregards, tom lane\n> \n\n-- \nJim Vanns\nSystems Programmer\nFramestore\n\n",
"msg_date": "Mon, 23 Jul 2012 16:49:25 +0100",
"msg_from": "Jim Vanns <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd blocking (or massively latent) issue - even with\n EXPLAIN"
},
{
"msg_contents": "On 07/23/2012 10:46 PM, Jim Vanns wrote:\n> Nothing obvious - and we log a fair amount. No tmp table creations,\n> no locks held.\n>\n> To add to this EXPLAIN reports it took only 0.23ms to run (for example)\n> whereas the wall clock time is more like 20-30 minutes (or up to n hours\n> as I said where everything appears to click back into place at the same\n> time).\nHow many concurrent connections do you have?\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 24 Jul 2012 08:30:22 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd blocking (or massively latent) issue - even with\n EXPLAIN"
},
{
"msg_contents": "> > Hi> > > > > > > > We're seeing SELECT statements and even EXPLAIN (no ANAYLZE) > > > statements hang indefinitely until *something* (we don't know what)> > > releases some kind of resource or no longer becomes a massive bottle> > > neck. These are the symptoms.> > > > Is this in pgAdmin? Or psql on the console?> > > psql> > > > However, the system seems healthy - no table ('heavyweight') locks> > are> > > held by any session (this happens with only a few connected> > sessions),> > > all indexes are used correctly, other transactions are writing data> > (we> > > generally only have a few sessions running at a time - perhaps 10)> > etc.> > > etc. In fact, we are writing (or bgwriter is), 2-3 hundred MB/s> > > sometimes.> > > > What is shown in \"top\" and \"iostat\" whilst the queries are running?> > Generally, lots of CPU churn (90-100%) and a fair bit of I/O wait.> iostat reports massive reads (up to 300MB/s).This looks like this is a pure IO issue. You mentioned that this was a software RAID system. I wonder if there's some complication there.Have you tried setting the disk queues to deadline?echo \"deadline\" > /sys/block/{DEVICE-NAME}/queue/schedulerThat might help. But to be honest, it really does sound disk/software raid related with the CPU and IO being so high.Can you attempt to replicate the problem on another system without software RAID?Also, you might want to try a disk test on the machine, it's 24GB ram right?so, try the following tests on the Postgres data disk (you'll obviously need lots of space for this):Write Test: time sh -c \"dd if=/dev/zero of=bigfile bs=8k count=6000000 && sync\"Read Test: time dd if=bigfile of=/dev/null bs=8k( Tests taken from Greg Smiths page: http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm )> > > > > > > We regularly run vacuum analyze at quiet periods - generally 1-2s> > daily.> > (this is to answer to someone who didn't reply to the list)> > We run full scans using vacuumdb so don't just rely on autovacuum. The> small table is so small (<50 tuples) a sequence scan is always> performed.> > > > These sessions (that only read data) that are blocked can block from> > > anything from between only 5 minutes to 10s of hours then> > miraculously> > > complete successfully at once.> > > > > > > Are any \"blockers\" shown in pg_stat_activity?> > None. Ever. Nothing in pg_locks either.> > > > > > > checkpoint_segments = 128> > > maintenance_work_mem = 256MB> > > synchronous_commit = off> > > random_page_cost = 3.0> > > wal_buffers = 16MB> > > shared_buffers = 8192MB> > > checkpoint_completion_target = 0.9> > > effective_cache_size = 18432MB> > > work_mem = 32MB> > > effective_io_concurrency = 12> > > max_stack_depth = 8MB> > > log_autovacuum_min_duration = 0> > > log_lock_waits = on> > > autovacuum_vacuum_scale_factor = 0.1> > > autovacuum_naptime = 8> > > autovacuum_max_workers = 4> > > > Memory looks reasonably configured to me. effective_cache_size is only> > an indication to the planner and is not actually allocated. > > I realise that.> > > Is anything being written to the logfiles?> > Nothing obvious - and we log a fair amount. No tmp table creations,> no locks held. 
> > To add to this EXPLAIN reports it took only 0.23ms to run (for example)> whereas the wall clock time is more like 20-30 minutes (or up to n hours> as I said where everything appears to click back into place at the same> time).> > Thanks.> Something else you might want to try is running with a default Postgresql.conf, if the query/explain then runs fine, then that would lead me to believe that there is a configuration issue. Although I'm pretty convinced that it may be the disk set up. Cheers=============================================\n\nRomax Technology Limited\nRutherford House\nNottingham Science & Technology Park\nNottingham, \nNG7 2PZ\nEngland\n\nTelephone numbers:\n+44 (0)115 951 88 00 (main)\n\nFor other office locations see:\nhttp://www.romaxtech.com/Contact\n=================================\n===============\nE-mail: [email protected]\nWebsite: www.romaxtech.com\n=================================\n\n================\nConfidentiality Statement\nThis transmission is for the addressee only and contains information that is confidential and privileged.\nUnless you are the named addressee, or authorised to receive it on behalf of the addressee \nyou may not copy or use it, or disclose it to anyone else. \nIf you have received this transmission in error please delete from your system and contact the sender. Thank you for your cooperation.\n=================================================\n\n",
"msg_date": "Tue, 24 Jul 2012 07:50:45 +0100",
"msg_from": "\"Martin French\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd blocking (or massively latent) issue - even with EXPLAIN"
},
{
"msg_contents": "On Tue, 2012-07-24 at 08:30 +0800, Craig Ringer wrote:\n> On 07/23/2012 10:46 PM, Jim Vanns wrote:\n> > Nothing obvious - and we log a fair amount. No tmp table creations,\n> > no locks held.\n> >\n> > To add to this EXPLAIN reports it took only 0.23ms to run (for example)\n> > whereas the wall clock time is more like 20-30 minutes (or up to n hours\n> > as I said where everything appears to click back into place at the same\n> > time).\n> How many concurrent connections do you have?\n\nBetween 4 and 64 at peak! max_connections is only set to 100.\n\nJim\n\n> --\n> Craig Ringer\n\n-- \nJim Vanns\nSystems Programmer\nFramestore\n\n",
"msg_date": "Tue, 24 Jul 2012 09:37:12 +0100",
"msg_from": "Jim Vanns <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd blocking (or massively latent) issue - even with\n EXPLAIN"
},
{
"msg_contents": "> This looks like this is a pure IO issue. You mentioned that this was a\n> software RAID system. I wonder if there's some complication there.\n> \n> Have you tried setting the disk queues to deadline?\n> \n> echo \"deadline\" > /sys/block/{DEVICE-NAME}/queue/scheduler\n> That might help. But to be honest, it really does sound disk/software\n> raid related with the CPU and IO being so high.\n> \n> Can you attempt to replicate the problem on another system without\n> software RAID?\n> \n> Also, you might want to try a disk test on the machine, it's 24GB ram\n> right?\n> \n> so, try the following tests on the Postgres data disk (you'll\n> obviously need lots of space for this):\n> \n> Write Test: \n> time sh -c \"dd if=/dev/zero of=bigfile bs=8k count=6000000 && sync\"\n> \n> Read Test:\n> time dd if=bigfile of=/dev/null bs=8k\n\nI've already tried something very similar using dd. No performance\npenalties during a normal running of the system - or when this blocking\nhappens either actually. But I agree, it does indeed sound like some\nsort of I/O problem. I just don't know what! I do have a few more tricks\nup my sleeve that I'll try today. I'll post any results that I have.\n\nThat latter test - won't that pretty much just read from the page cache?\n'sync' may well have forced dirty pages to disk but does it actually\nevict them to?\n\nAnyway, that is off topic... perhaps ;)\n\nThanks again,\n\nJim\n\n> ( Tests taken from Greg Smiths page:\n> http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm )\n> \n> > \n> > > > \n> > > > We regularly run vacuum analyze at quiet periods - generally\n> 1-2s\n> > > daily.\n> > \n> > (this is to answer to someone who didn't reply to the list)\n> > \n> > We run full scans using vacuumdb so don't just rely on autovacuum.\n> The\n> > small table is so small (<50 tuples) a sequence scan is always\n> > performed.\n> > \n> > > > These sessions (that only read data) that are blocked can block\n> from\n> > > > anything from between only 5 minutes to 10s of hours then\n> > > miraculously\n> > > > complete successfully at once.\n> > > > \n> > > \n> > > Are any \"blockers\" shown in pg_stat_activity?\n> > \n> > None. Ever. Nothing in pg_locks either.\n> > \n> > > > \n> > > > checkpoint_segments = 128\n> > > > maintenance_work_mem = 256MB\n> > > > synchronous_commit = off\n> > > > random_page_cost = 3.0\n> > > > wal_buffers = 16MB\n> > > > shared_buffers = 8192MB\n> > > > checkpoint_completion_target = 0.9\n> > > > effective_cache_size = 18432MB\n> > > > work_mem = 32MB\n> > > > effective_io_concurrency = 12\n> > > > max_stack_depth = 8MB\n> > > > log_autovacuum_min_duration = 0\n> > > > log_lock_waits = on\n> > > > autovacuum_vacuum_scale_factor = 0.1\n> > > > autovacuum_naptime = 8\n> > > > autovacuum_max_workers = 4\n> > > \n> > > Memory looks reasonably configured to me. effective_cache_size is\n> only\n> > > an indication to the planner and is not actually allocated. \n> > \n> > I realise that.\n> > \n> > > Is anything being written to the logfiles?\n> > \n> > Nothing obvious - and we log a fair amount. No tmp table creations,\n> > no locks held. 
\n> > \n> > To add to this EXPLAIN reports it took only 0.23ms to run (for\n> example)\n> > whereas the wall clock time is more like 20-30 minutes (or up to n\n> hours\n> > as I said where everything appears to click back into place at the\n> same\n> > time).\n> > \n> > Thanks.\n> > \n> \n> Something else you might want to try is running with a default\n> Postgresql.conf, if the query/explain then runs fine, then that would\n> lead me to believe that there is a configuration issue. Although I'm\n> pretty convinced that it may be the disk set up. \n> \n> Cheers\n> ============================================= Romax Technology Limited\n> Rutherford House Nottingham Science & Technology Park Nottingham, NG7\n> 2PZ England Telephone numbers: +44 (0)115 951 88 00 (main) For other\n> office locations see: http://www.romaxtech.com/Contact\n> ================================= =============== E-mail:\n> [email protected] Website: www.romaxtech.com\n> ================================= ================ Confidentiality\n> Statement This transmission is for the addressee only and contains\n> information that is confidential and privileged. Unless you are the\n> named addressee, or authorised to receive it on behalf of the\n> addressee you may not copy or use it, or disclose it to anyone else.\n> If you have received this transmission in error please delete from\n> your system and contact the sender. Thank you for your cooperation.\n> =================================================\n> \n\n-- \nJim Vanns\nSystems Programmer\nFramestore\n\n",
"msg_date": "Tue, 24 Jul 2012 09:48:11 +0100",
"msg_from": "Jim Vanns <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd blocking (or massively latent) issue - even with\n EXPLAIN"
},
{
"msg_contents": "Hi Jim,> > I've already tried something very similar using dd. No performance> penalties during a normal running of the system - or when this blocking> happens either actually. But I agree, it does indeed sound like some> sort of I/O problem. I just don't know what! I do have a few more tricks> up my sleeve that I'll try today. I'll post any results that I have.Ok, let me know how you get on. > > That latter test - won't that pretty much just read from the page cache?> 'sync' may well have forced dirty pages to disk but does it actually> evict them to?Basically, the cache is avoided because of the size of the file. 6000000 blocks at 8k exceeds the size of RAM in the machine, so it *should* miss the cache and hit the disk directly. :)> > Anyway, that is off topic... perhaps ;)> > Thanks again,> > Jim> CheersMartin =============================================\n\nRomax Technology Limited\nRutherford House\nNottingham Science & Technology Park\nNottingham, \nNG7 2PZ\nEngland\n\nTelephone numbers:\n+44 (0)115 951 88 00 (main)\n\nFor other office locations see:\nhttp://www.romaxtech.com/Contact\n=================================\n===============\nE-mail: [email protected]\nWebsite: www.romaxtech.com\n=================================\n\n================\nConfidentiality Statement\nThis transmission is for the addressee only and contains information that is confidential and privileged.\nUnless you are the named addressee, or authorised to receive it on behalf of the addressee \nyou may not copy or use it, or disclose it to anyone else. \nIf you have received this transmission in error please delete from your system and contact the sender. Thank you for your cooperation.\n=================================================\n\n",
"msg_date": "Tue, 24 Jul 2012 10:18:53 +0100",
"msg_from": "\"Martin French\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd blocking (or massively latent) issue - even with EXPLAIN"
},
{
"msg_contents": "> > That latter test - won't that pretty much just read from the page\n> cache?\n> > 'sync' may well have forced dirty pages to disk but does it actually\n> > evict them to?\n> \n> Basically, the cache is avoided because of the size of the file.\n> 6000000 blocks at 8k exceeds the size of RAM in the machine, so it\n> *should* miss the cache and hit the disk directly. :)\n\nDoh!\n\n> > \n> > Anyway, that is off topic... perhaps ;)\n> > \n> > Thanks again,\n> > \n> > Jim\n> > \n> \n> Cheers\n> \n> Martin ============================================= Romax Technology\n> Limited Rutherford House Nottingham Science & Technology Park\n> Nottingham, NG7 2PZ England Telephone numbers: +44 (0)115 951 88 00\n> (main) For other office locations see:\n> http://www.romaxtech.com/Contact =================================\n> =============== E-mail: [email protected] Website: www.romaxtech.com\n> ================================= ================ Confidentiality\n> Statement This transmission is for the addressee only and contains\n> information that is confidential and privileged. Unless you are the\n> named addressee, or authorised to receive it on behalf of the\n> addressee you may not copy or use it, or disclose it to anyone else.\n> If you have received this transmission in error please delete from\n> your system and contact the sender. Thank you for your cooperation.\n> =================================================\n> \n\n-- \nJim Vanns\nSystems Programmer\nFramestore\n\n",
"msg_date": "Tue, 24 Jul 2012 10:34:11 +0100",
"msg_from": "Jim Vanns <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd blocking (or massively latent) issue - even with\n EXPLAIN"
},
{
"msg_contents": "Hello again. So sorry for resurrecting such an old thread but the\nproblem still persists - I've just had very little to report on, until\nnow...\n\n> > That latter test - won't that pretty much just read from the page\n> cache?\n> > 'sync' may well have forced dirty pages to disk but does it actually\n> > evict them to?\n> \n> Basically, the cache is avoided because of the size of the file.\n> 6000000 blocks at 8k exceeds the size of RAM in the machine, so it\n> *should* miss the cache and hit the disk directly. :)\n\nOK, I did this during the problematic time and write speeds (sustained)\nare in the order of 250MB/s :) It took just 3m21s to write all ~50GB. We\nget read speeds of a wonderfully massive ~530MB/s - the whole file read\nin just 1m30s. All metrics gathered with iostat -p <devices> -m 1.\n\nNow, what I have noticed is this; we run two databases on this one\nmachine. One (DB) is completely operable in a normal way during the slow\nperiod and the other is not. This (unresponsive) database has just two\nclient processes connected - one is writing, the other is (meant to be)\nreading.\n\nNeither registers in pg_locks - one does not block the other at the DB\nlevel. However the write process (INSERTs) is writing between 5 and 10\nMB/s. The read process (a SELECT or EXPLAIN) just spins the CPU at 100%\nand register 0.0 MB/s - yet it should be reading *a lot* of data.\n\nSo does PostgreSQL somehow (or have I misconfigured it to) always\nprioritise writes over reads?\n\nI'm still at a loss!\n\nAny further pointers would be appreciated.\n\nJim\n\nPS. This is with the deadline scheduler and with each block device set\nwith --setra 8192.\n\n> > Anyway, that is off topic... perhaps ;)\n> > \n> > Thanks again,\n> > \n> > Jim\n> > \n> \n> Cheers\n> \n> Martin ============================================= Romax Technology\n> Limited Rutherford House Nottingham Science & Technology Park\n> Nottingham, NG7 2PZ England Telephone numbers: +44 (0)115 951 88 00\n> (main) For other office locations see:\n> http://www.romaxtech.com/Contact =================================\n> =============== E-mail: [email protected] Website: www.romaxtech.com\n> ================================= ================ Confidentiality\n> Statement This transmission is for the addressee only and contains\n> information that is confidential and privileged. Unless you are the\n> named addressee, or authorised to receive it on behalf of the\n> addressee you may not copy or use it, or disclose it to anyone else.\n> If you have received this transmission in error please delete from\n> your system and contact the sender. Thank you for your cooperation.\n> =================================================\n> \n\n-- \nJim Vanns\nSystems Programmer\nFramestore\n\n\n",
"msg_date": "Thu, 16 Aug 2012 17:14:36 +0100",
"msg_from": "Jim Vanns <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd blocking (or massively latent) issue - even with\n EXPLAIN"
}
] |
[
{
"msg_contents": "Hello,\n\nWe are using Postgres 9.1.4. We are struggling with a class of queries\nthat got impossible to run after sharding a large table. Everything\nlike:\n\n select small.something, big.anything\n from small join big on small.big_id = big.id;\n\nand variation such as \"select * from big where id in (select big_id from small)\"\n\nSince \"big\" was sharded, the query plan results in something like:\n\n Hash Join (cost=10000000001.23..30038997974.72 rows=10 width=753)\n Hash Cond: (b.id = i.big_id)\n -> Append (cost=0.00..20038552251.23 rows=118859245 width=11)\n -> Index Scan using big_201207_pkey on big_201207 b\n(cost=0.00..2224100.46 rows=1609634 width=12)\n -> Index Scan using big_201101_pkey on big_201101 b\n(cost=0.00..404899.71 rows=5437497 width=12)\n -> Index Scan using big_201104_pkey on big_201104 b\n(cost=0.00..349657.58 rows=4625181 width=12)\n -> [...all the shards]\n -> Hash (cost=10000000001.10..10000000001.10 rows=10 width=742)\n -> Seq Scan on small i (cost=10000000000.00..10000000001.10\nrows=10 width=742)\n\nPostgres ends up in never-ending reads: even if \"small\" has only three\nrows I've never seen such query finishing, the time passed being even\nlonger than a full scan on big.\n\nThe plan looks sub-optimal, as it seems it first does a huge indexscan\nof all the partitions, then it joins the result against a small hash.\n\n1. Can we fix the queries to work around this problem?\n\n2. Could the planner be fixed for this scenario for PG 9.2 (or 9.3)?\nCreating the hash beforehand, performing an hash join for each\npartition and merging the results looks like it would bring it back\ninto the realm of the runnable queries. Am I wrong?\n\nThank you very much.\n\n-- Daniele\n",
"msg_date": "Mon, 23 Jul 2012 11:03:11 +0100",
"msg_from": "Daniele Varrazzo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Shards + hash = forever running queries"
},
{
"msg_contents": "On Mon, Jul 23, 2012 at 11:03 AM, Daniele Varrazzo\n<[email protected]> wrote:\n\n> 1. Can we fix the queries to work around this problem?\n\nAs a stop-gap measure I've defined a get_big(id) function and using it\nto pull in the details we're interested into from the \"big\" table:\n\n create function get_big (id int) returns big as $$\n select * from big where id = $1;\n $$ language sql stable strict;\n\nI'm not completely satisfied by it though: if there's any better\nsolution I'd be happy to know.\n\nThank you,\n\n-- Daniele\n",
"msg_date": "Mon, 23 Jul 2012 12:43:38 +0100",
"msg_from": "Daniele Varrazzo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shards + hash = forever running queries"
},
{
"msg_contents": "Daniele Varrazzo <[email protected]> writes:\n> Since \"big\" was sharded, the query plan results in something like:\n\n> Hash Join (cost=10000000001.23..30038997974.72 rows=10 width=753)\n> Hash Cond: (b.id = i.big_id)\n> -> Append (cost=0.00..20038552251.23 rows=118859245 width=11)\n> -> Index Scan using big_201207_pkey on big_201207 b\n> (cost=0.00..2224100.46 rows=1609634 width=12)\n> -> Index Scan using big_201101_pkey on big_201101 b\n> (cost=0.00..404899.71 rows=5437497 width=12)\n> -> Index Scan using big_201104_pkey on big_201104 b\n> (cost=0.00..349657.58 rows=4625181 width=12)\n> -> [...all the shards]\n> -> Hash (cost=10000000001.10..10000000001.10 rows=10 width=742)\n> -> Seq Scan on small i (cost=10000000000.00..10000000001.10\n> rows=10 width=742)\n\n[ squint... ] 9.1 certainly ought to be able to find a smarter plan for\nsuch a case. For instance, if I try this on 9.1 branch tip:\n\nregression=# create table p (id int primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"p_pkey\" for table \"p\"\nCREATE TABLE\nregression=# create table c1 (primary key (id)) inherits(p);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"c1_pkey\" for table \"c1\"\nCREATE TABLE\nregression=# create table c2 (primary key (id)) inherits(p);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"c2_pkey\" for table \"c2\"\nCREATE TABLE\nregression=# explain select * from p,int4_tbl where id=f1;\n QUERY PLAN \n--------------------------------------------------------------------------------\n Nested Loop (cost=0.00..53.25 rows=120 width=8)\n Join Filter: (public.p.id = int4_tbl.f1)\n -> Seq Scan on int4_tbl (cost=0.00..1.05 rows=5 width=4)\n -> Append (cost=0.00..10.40 rows=3 width=4)\n -> Index Scan using p_pkey on p (cost=0.00..1.87 rows=1 width=4)\n Index Cond: (id = int4_tbl.f1)\n -> Index Scan using c1_pkey on c1 p (cost=0.00..4.27 rows=1 width=4)\n Index Cond: (id = int4_tbl.f1)\n -> Index Scan using c2_pkey on c2 p (cost=0.00..4.27 rows=1 width=4)\n Index Cond: (id = int4_tbl.f1)\n(10 rows)\n\nYou have evidently got enable_seqscan turned off, so I wonder whether\nthe cost penalties applied by that are swamping the estimates. Do you\nget any better results if you re-enable that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Jul 2012 11:07:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Shards + hash = forever running queries"
},
{
"msg_contents": "On Mon, Jul 23, 2012 at 4:07 PM, Tom Lane <[email protected]> wrote:\n> Daniele Varrazzo <[email protected]> writes:\n>> Since \"big\" was sharded, the query plan results in something like:\n>> [ugly]\n>\n> [ squint... ] 9.1 certainly ought to be able to find a smarter plan for\n> such a case. For instance, if I try this on 9.1 branch tip:\n> [good]\n>\n> You have evidently got enable_seqscan turned off, so I wonder whether\n> the cost penalties applied by that are swamping the estimates. Do you\n> get any better results if you re-enable that?\n\nHello Tom, thank you for your help.\n\nActually, I don't know what to say. seqscan were most likely enabled\nwhen the problem showed up. I may have disabled it for testing in my\nsession and the plan I've copied may have been generated with disabled\nseqscan, but the original problem (that query never completing) was\nreported in different sessions by different people, so the only\npossibility was that seqscans were disabled in the config file...\nwhich I have been confirmed was not the case. I hadn't tested in my\nsession whether they were disabled before explicitly disabling them\nfor testing.\n\nMatter of fact, after reading your reply, I've tested the query\nagain... and it was fast, the plan being the nested loop of your\nexample. :-\\ What can I say, thank you for your radiation...\n\nI've tried reverting other schema changes we performed yesterday but\nI've not been able to reproduce the original slowness. In case we find\nsomething that may be any useful to postgres I'll report it back.\n\nHave a nice day,\n\n-- Daniele\n",
"msg_date": "Tue, 24 Jul 2012 10:06:15 +0100",
"msg_from": "Daniele Varrazzo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Shards + hash = forever running queries"
}
] |
[
{
"msg_contents": "My mental model of the EXISTS clause must be off. This snippet appears at\nthe end of a series of WITH clauses I suspect are irrelevant:\n\nwith etc etc ... , cids as\n> (select distinct c.id from ddr2 c\n> join claim_entries ce on ce.claim_id = c.id\n> where (c.assigned_ddr = 879\n> or exists (select 1 from ddr_cdt dc\n> where\n> dc.sys_user_id = 879\n> and dc.document_type = c.document_type\n> -- makes it faster: and (dc.cdt_code is null or dc.cdt_code = ce.cpt_code)\n> )))\n>\n> select count(*) from cids\n\n\nIf I uncomment the bit where it says \"make it faster\" I get decent response\nand the graphical analyze display shows the expected user+doctype+cdtcode\nindex is being used (and nice thin lines suggesting efficient lookup).\n\nAs it is, the analyze display shows the expected user+doctype index* being\nused but the lines are fat, and performance is an exponential disaster.\n\n* I created the (to me ) redundant user+doctype index trying to get\nPostgres to Do the Right Thing(tm), but I can see that was not the issue.\n\nI presume the reason performance drops off a cliff is because there can be\n9000 cdt_codes for one user+doctype, but I was hoping EXISTS would just\nlook to see if there was at least one row matching user+doctype and return\nits decision. I have tried select *, select 1, and limit 1 on the nested\nselect to no avail.\n\nAm I just doing something wrong? I am a relative noob. Is there some other\nhint I can give the planner?\n\nThx, ken\n\nMy mental model of the EXISTS clause must be off. This snippet appears at the end of a series of WITH clauses I suspect are irrelevant:\nwith etc etc ... , cids as (select distinct c.id from ddr2 c join claim_entries ce on ce.claim_id = c.id\n where (c.assigned_ddr = 879 or exists (select 1 from ddr_cdt dc where dc.sys_user_id = 879\n and dc.document_type = c.document_type -- makes it faster: and (dc.cdt_code is null or dc.cdt_code = ce.cpt_code) )))\n select count(*) from cidsIf I uncomment the bit where it says \"make it faster\" I get decent response and the graphical analyze display shows the expected user+doctype+cdtcode index is being used (and nice thin lines suggesting efficient lookup).\nAs it is, the analyze display shows the expected user+doctype index* being used but the lines are fat, and performance is an exponential disaster.* I created the (to me ) redundant user+doctype index trying to get Postgres to Do the Right Thing(tm), but I can see that was not the issue.\nI presume the reason performance drops off a cliff is because there can be 9000 cdt_codes for one user+doctype, but I was hoping EXISTS would just look to see if there was at least one row matching user+doctype and return its decision. I have tried select *, select 1, and limit 1 on the nested select to no avail.\nAm I just doing something wrong? I am a relative noob. Is there some other hint I can give the planner?Thx, ken",
"msg_date": "Mon, 23 Jul 2012 14:12:39 -0700",
"msg_from": "Kenneth Tilton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Efficiency of EXISTS?"
},
{
"msg_contents": "On Mon, Jul 23, 2012 at 4:12 PM, Kenneth Tilton <[email protected]> wrote:\n> My mental model of the EXISTS clause must be off. This snippet appears at\n> the end of a series of WITH clauses I suspect are irrelevant:\n>\n>> with etc etc ... , cids as\n>> (select distinct c.id from ddr2 c\n>> join claim_entries ce on ce.claim_id = c.id\n>> where (c.assigned_ddr = 879\n>> or exists (select 1 from ddr_cdt dc\n>> where\n>> dc.sys_user_id = 879\n>> and dc.document_type = c.document_type\n>> -- makes it faster: and (dc.cdt_code is null or dc.cdt_code = ce.cpt_code)\n>> )))\n>>\n>> select count(*) from cids\n>\n>\n> If I uncomment the bit where it says \"make it faster\" I get decent response\n> and the graphical analyze display shows the expected user+doctype+cdtcode\n> index is being used (and nice thin lines suggesting efficient lookup).\n>\n> As it is, the analyze display shows the expected user+doctype index* being\n> used but the lines are fat, and performance is an exponential disaster.\n>\n> * I created the (to me ) redundant user+doctype index trying to get Postgres\n> to Do the Right Thing(tm), but I can see that was not the issue.\n>\n> I presume the reason performance drops off a cliff is because there can be\n> 9000 cdt_codes for one user+doctype, but I was hoping EXISTS would just look\n> to see if there was at least one row matching user+doctype and return its\n> decision. I have tried select *, select 1, and limit 1 on the nested select\n> to no avail.\n>\n> Am I just doing something wrong? I am a relative noob. Is there some other\n> hint I can give the planner?\n\nhard to say without having the explain analyze output. also it's not\nclear why you need to use WITH, at least for the terminating query.\nI'd just do:\n\nselect count(*) from\n(\n inner_query\n)\n\nmerlin\n",
"msg_date": "Mon, 23 Jul 2012 16:52:33 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiency of EXISTS?"
},
{
"msg_contents": "On Mon, Jul 23, 2012 at 2:52 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Mon, Jul 23, 2012 at 4:12 PM, Kenneth Tilton <[email protected]> wrote:\n> > My mental model of the EXISTS clause must be off. This snippet appears at\n> > the end of a series of WITH clauses I suspect are irrelevant:\n> >\n> >> with etc etc ... , cids as\n> >> (select distinct c.id from ddr2 c\n> >> join claim_entries ce on ce.claim_id = c.id\n> >> where (c.assigned_ddr = 879\n> >> or exists (select 1 from ddr_cdt dc\n> >> where\n> >> dc.sys_user_id = 879\n> >> and dc.document_type = c.document_type\n> >> -- makes it faster: and (dc.cdt_code is null or dc.cdt_code =\n> ce.cpt_code)\n> >> )))\n> >>\n> >> select count(*) from cids\n> >\n> >\n> > If I uncomment the bit where it says \"make it faster\" I get decent\n> response\n> > and the graphical analyze display shows the expected user+doctype+cdtcode\n> > index is being used (and nice thin lines suggesting efficient lookup).\n> >\n> > As it is, the analyze display shows the expected user+doctype index*\n> being\n> > used but the lines are fat, and performance is an exponential disaster.\n> >\n> > * I created the (to me ) redundant user+doctype index trying to get\n> Postgres\n> > to Do the Right Thing(tm), but I can see that was not the issue.\n> >\n> > I presume the reason performance drops off a cliff is because there can\n> be\n> > 9000 cdt_codes for one user+doctype, but I was hoping EXISTS would just\n> look\n> > to see if there was at least one row matching user+doctype and return its\n> > decision. I have tried select *, select 1, and limit 1 on the nested\n> select\n> > to no avail.\n> >\n> > Am I just doing something wrong? I am a relative noob. Is there some\n> other\n> > hint I can give the planner?\n>\n> hard to say without having the explain analyze output. also it's not\n> clear why you need to use WITH, at least for the terminating query.\n> I'd just do:\n>\n> select count(*) from\n> (\n> inner_query\n> )\n>\n\nOK. 
Here is the full query:\n\nwith ddr as (\nselect c.id\n ,case\n when c.portal_user_id is null then u.provider_facility_id\n else pu.provider_facility_id\n end provider_facility_id\n from claims c\n left join sys_users u on u.id = c.created_by\n left join portal_users pu on pu.id = c.portal_user_id\n WHERE c.deleted = 0\n AND c.status >= 0\n AND (c.created_by is not null or c.portal_user_id is not null)\n AND true not in ( select ineligible_code_id in (46,65)\nfrom claim_carcs cc\nwhere c.id = cc.claim_id\nand cc.deleted = 0 )\n AND (false OR c.document_type = 0)\n AND (false OR c.group_plan_id = 44)\n\n limit 1500\n)\n\n,ddr2 as (\nselect c.id\n , c.document_type\n , c.radiographs\n , c.nea_number\n , c.assigned_ddr\n , d.provider_facility_id as submitting_facility_id\n , count(ca.id) as claim_attachments_count\n , cast(exists (select 1 from triples where s = c.id and sda='claims' and p\n= 'ddr-review-passed-on-by') as boolean) as passedon\n from ddr d\ninner join\nclaims c on d.id = c.id\njoin claim_attachments ca on c.id = ca.claim_id\ngroup by\n c.id\n , submitting_facility_id\nhaving ((nullif(trim(c.nea_number, ' '),'') is not null)\nor case transmission_method\nwhen 'P' then count(distinct ca.id) > 1\nelse count(distinct ca.id) > 0\n end\nor c.radiographs > 0))\n\n, cids as\n (select distinct c.id from ddr2 c\njoin claim_entries ce on ce.claim_id = c.id\nwhere (c.assigned_ddr = 879\nor exists (select 1 from ddr_cdt dc\nwhere\ndc.sys_user_id = 879\nand dc.document_type = c.document_type\n--and (dc.cdt_code is null or dc.cdt_code = ce.cpt_code)\n)))\nselect count(*) from cids\n\nAnd the explain output:\n\n\"Aggregate (cost=56060.60..56060.61 rows=1 width=0)\"\n\" CTE ddr\"\n\" -> Limit (cost=306.29..16203.83 rows=1500 width=16)\"\n\" -> Nested Loop Left Join (cost=306.29..7442626.75 rows=702214\nwidth=16)\"\n\" -> Hash Left Join (cost=306.29..7244556.97 rows=702214\nwidth=12)\"\n\" Hash Cond: (c.created_by = u.id)\"\n\" -> Index Scan using claims_lca1 on claims c\n (cost=0.00..7230212.96 rows=702214 width=12)\"\n\" Index Cond: ((deleted = 0) AND (status >= 0)\nAND (group_plan_id = 44) AND (document_type = 0))\"\n\" Filter: (((created_by IS NOT NULL) OR\n(portal_user_id IS NOT NULL)) AND (NOT (SubPlan 1)))\"\n\" SubPlan 1\"\n\" -> Index Scan using claim_carcs_claim_id on\nclaim_carcs cc (cost=0.00..9.23 rows=1 width=4)\"\n\" Index Cond: (c.id = claim_id)\"\n\" Filter: (deleted = 0)\"\n\" -> Hash (cost=224.46..224.46 rows=6546 width=8)\"\n\" -> Seq Scan on sys_users u\n (cost=0.00..224.46 rows=6546 width=8)\"\n\" -> Index Scan using portal_users_pkey on portal_users pu\n (cost=0.00..0.27 rows=1 width=8)\"\n\" Index Cond: (id = c.portal_user_id)\"\n\" CTE ddr2\"\n\" -> GroupAggregate (cost=25714.40..28093.98 rows=286 width=27)\"\n\" Filter: ((NULLIF(btrim((c.nea_number)::text, ' '::text),\n''::text) IS NOT NULL) OR CASE c.transmission_method WHEN 'P'::bpchar THEN\n(count(DISTINCT ca.id) > 1) ELSE (count(DISTINCT ca.id) > 0) END OR\n(c.radiographs > 0))\"\n\" -> Sort (cost=25714.40..25715.11 rows=286 width=27)\"\n\" Sort Key: c.id, d.provider_facility_id\"\n\" -> Nested Loop (cost=0.00..25702.73 rows=286 width=27)\"\n\" -> Nested Loop (cost=0.00..12752.74 rows=1500\nwidth=27)\"\n\" -> CTE Scan on ddr d (cost=0.00..30.00\nrows=1500 width=8)\"\n\" -> Index Scan using claims_pkey on claims c\n (cost=0.00..8.47 rows=1 width=19)\"\n\" Index Cond: (id = d.id)\"\n\" -> Index Scan using claim_attachments_claim on\nclaim_attachments ca (cost=0.00..8.61 rows=2 width=8)\"\n\" Index Cond: (claim_id = 
c.id)\"\n\" SubPlan 3\"\n\" -> Index Scan using triples_s_idx on triples\n (cost=0.00..8.28 rows=1 width=0)\"\n\" Index Cond: (s = c.id)\"\n\" Filter: ((sda = 'claims'::text) AND (p =\n'ddr-review-passed-on-by'::text))\"\n\" SubPlan 4\"\n\" -> Bitmap Heap Scan on triples (cost=102.70..1010.15\nrows=823 width=8)\"\n\" Recheck Cond: (p = 'ddr-review-passed-on-by'::text)\"\n\" Filter: (sda = 'claims'::text)\"\n\" -> Bitmap Index Scan on triples_p_idx\n (cost=0.00..102.49 rows=3497 width=0)\"\n\" Index Cond: (p = 'ddr-review-passed-on-by'::text)\"\n\" CTE cids\"\n\" -> HashAggregate (cost=11759.51..11760.52 rows=101 width=4)\"\n\" -> Nested Loop (cost=0.00..11722.94 rows=14627 width=4)\"\n\" -> CTE Scan on ddr2 c (cost=0.00..112.75 rows=144\nwidth=4)\"\n\" Filter: ((assigned_ddr = 879) OR (alternatives:\nSubPlan 6 or hashed SubPlan 7))\"\n\" SubPlan 6\"\n\" -> Seq Scan on ddr_cdt dc (cost=0.00..134293.58\nrows=361282 width=0)\"\n\" Filter: ((sys_user_id = 879) AND\n(document_type = c.document_type))\"\n\" SubPlan 7\"\n\" -> Bitmap Heap Scan on ddr_cdt dc\n (cost=20292.74..73868.80 rows=1083845 width=4)\"\n\" Recheck Cond: (sys_user_id = 879)\"\n\" -> Bitmap Index Scan on\n\"ddr-cdt-idx-user-doc\" (cost=0.00..20021.78 rows=1083845 width=0)\"\n\" Index Cond: (sys_user_id = 879)\"\n\" -> Index Scan using claim_entries_claim_id on\nclaim_entries ce (cost=0.00..79.35 rows=102 width=4)\"\n\" Index Cond: (claim_id = c.id)\"\n\" -> CTE Scan on cids (cost=0.00..2.02 rows=101 width=0)\"\n\nMore interesting: I tried reducing the complex query to a simpler query and\nwhat I saw was that my mental model of EXISTS is fine. :) It was efficient\nin the way I expected, and faster than the version that did the last test\n(the cdt_code test). Now I just have to find out why it is slower in vivo.\n\nThx, ken\n\n\n\n-- \nKenneth Tilton\n\n*Director of Software Development*\n\n*MCNA Dental Plans*\n200 West Cypress Creek Road\nSuite 500\nFort Lauderdale, FL 33309\n\n954-730-7131 X181 (Office)\n954-628-3347 (Fax)\n1-800-494-6262 X181 (Toll Free)\n\[email protected] <[email protected]> (Email)\n\nwww.mcna.net (Website)\nCONFIDENTIALITY NOTICE: This electronic mail may contain information that\nis privileged, confidential, and/or otherwise protected from disclosure to\nanyone other than its intended recipient(s). Any dissemination or use of\nthis electronic mail or its contents by persons other than the intended\nrecipient(s) is strictly prohibited. If you have received this\ncommunication in error, please notify the sender immediately by reply\ne-mail so that we may correct our internal records. Please then delete the\noriginal message. Thank you.\n\nOn Mon, Jul 23, 2012 at 2:52 PM, Merlin Moncure <[email protected]> wrote:\nOn Mon, Jul 23, 2012 at 4:12 PM, Kenneth Tilton <[email protected]> wrote:\n> My mental model of the EXISTS clause must be off. This snippet appears at\n> the end of a series of WITH clauses I suspect are irrelevant:\n>\n>> with etc etc ... 
"msg_date": "Mon, 23 Jul 2012 15:12:54 -0700",
"msg_from": "Kenneth Tilton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Efficiency of EXISTS?"
}
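The EXPLAIN output above turns on how the EXISTS test is executed: either as a correlated subplan probed once per outer row, or as a hashed subplan / semi-join built once (the "alternatives: SubPlan 6 or hashed SubPlan 7" line). A minimal, self-contained sketch that reproduces both shapes for comparison; the table names, sizes and distributions below are illustrative stand-ins, not the poster's schema:

-- toy stand-ins for claims / ddr_cdt
CREATE TEMP TABLE claims_demo AS
  SELECT g AS id, g % 5 AS document_type
  FROM generate_series(1, 100000) g;
CREATE TEMP TABLE ddr_cdt_demo AS
  SELECT g % 1000 AS sys_user_id, g % 5 AS document_type
  FROM generate_series(1, 500000) g;
ANALYZE claims_demo;
ANALYZE ddr_cdt_demo;

-- correlated EXISTS: depending on costs the planner runs this as a per-row
-- subplan or as a hashed subplan, the same choice shown in the plan above
EXPLAIN ANALYZE
SELECT count(*)
FROM claims_demo c
WHERE EXISTS (SELECT 1
              FROM ddr_cdt_demo dc
              WHERE dc.sys_user_id = 879
                AND dc.document_type = c.document_type);

Which form wins depends heavily on the data, which is presumably why the simplified test behaved differently from the full query in vivo.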
] |
[
{
"msg_contents": "Hi \n\n\nWe have noticed something strange in the interaction between our Geoserver instance and Postgres/PostGIS. \n\n\nAfter setting Geoserver's log level to include developer debugging, I managed to capture a single request from Geoserver WMS to PostGIS. \n\n\nThe (shortened) sequence of events and their timestamps: \n\n\n12:31:22,658 - SELECT query for MSG is sent to Postgres \n12:32:10,315 - Rendering for MSG layer starts \n12:32:10,356 - DB Connection Closed \n======== \n~ 48 seconds \n\n\nInterestingly enough, when I execute the same query (MSG) directly from PgAdmin3: \n\n\nSELECT \"frp_mw\",encode(ST_AsBinary(ST_Force_2D(\"the_geom\")),'base64') as \"the_geom\" FROM \"public\".\"af_msg_abba_datetime_today\" WHERE (\"the_geom\" && GeometryFromText('POLYGON ((-27.67968749408379 -46.92207325648429, -27.67968749408379 -6.186892358058866, 75.67968748740275 -6.186892358058866, 75.67968748740275 -46.92207325648429, -27.67968749408379 -46.92207325648429))', 4326) AND ((\"frp_mw\" >= -1 AND \"frp_mw\" <= 150) OR (\"frp_mw\" >= 151 AND \"frp_mw\" <= 300) OR (\"frp_mw\" >= 301 AND \"frp_mw\" <= 600) OR (\"frp_mw\" >= 601 AND \"frp_mw\" <= 50000))); \n\n\nI get 6515 rows in 380 ms. \n\n\nIe Postgres is able to return the results of the query within 380ms if queried from PgAdmin3 but Geoserver takes about 48 seconds to get hold of the same resultset. \n\n\nIs this some kind of JDBC problem perhaps? \n\n\nSome details about our setup: \n\n\nMaster Postgres database is on a separate VM from Geoserver, but we replicate to a slave Postgres cluster on the Geoserver VM (same host). So Geoserver is referencing the 'localhost' read-only Postgres cluster for its queries. \n\n\nThe 380 ms response time shown above was from the slave Postgres cluster, same one that Geoserver is using. \n\n\nAll Linux (Ubuntu 11.10) based. Postgres 9.1 PostGIS 1.5 Geoserver 2.1.3 \n\n\nRiaan \n\n\n\n\n\n\n-- \nThis message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard. \nThe full disclaimer details can be found at http://www.csir.co.za/disclaimer.html.\n\nThis message has been scanned for viruses and dangerous content by MailScanner, \nand is believed to be clean.\n\nPlease consider the environment before printing this email.\n\n\n\n\n\n\n\n\nHi \n\n\n\n\nWe have noticed something strange in the interaction between our Geoserver instance and Postgres/PostGIS. \n\n\n\n\nAfter setting Geoserver's log level to include developer debugging, I managed to capture a single request from Geoserver WMS to PostGIS. 
\n\n\n\n\nThe (shortened) sequence of events and their timestamps: \n\n\n\n\n12:31:22,658 - SELECT query for MSG is sent to Postgres \n\n12:32:10,315 - Rendering for MSG layer starts \n\n12:32:10,356 - DB Connection Closed \n\n======== \n\n~ 48 seconds \n\n\n\n\nInterestingly enough, when I execute the same query (MSG) directly from PgAdmin3: \n\n\n\n\nSELECT \"frp_mw\",encode(ST_AsBinary(ST_Force_2D(\"the_geom\")),'base64') as \"the_geom\" FROM \"public\".\"af_msg_abba_datetime_today\" WHERE (\"the_geom\" && GeometryFromText('POLYGON ((-27.67968749408379 -46.92207325648429, -27.67968749408379 -6.186892358058866, 75.67968748740275 -6.186892358058866, 75.67968748740275 -46.92207325648429, -27.67968749408379 -46.92207325648429))', 4326) AND ((\"frp_mw\" >= -1 AND \"frp_mw\" <= 150) OR (\"frp_mw\" >= 151 AND \"frp_mw\" <= 300) OR (\"frp_mw\" >= 301 AND \"frp_mw\" <= 600) OR (\"frp_mw\" >= 601 AND \"frp_mw\" <= 50000))); \n\n\n\n\nI get 6515 rows in 380 ms. \n\n\n\n\nIe Postgres is able to return the results of the query within 380ms if queried from PgAdmin3 but Geoserver takes about 48 seconds to get hold of the same resultset. \n\n\n\n\nIs this some kind of JDBC problem perhaps? \n\n\n\n\nSome details about our setup: \n\n\n\n\nMaster Postgres database is on a separate VM from Geoserver, but we replicate to a slave Postgres cluster on the Geoserver VM (same host). So Geoserver is referencing the 'localhost' read-only Postgres cluster for its queries. \n\n\n\n\nThe 380 ms response time shown above was from the slave Postgres cluster, same one that Geoserver is using. \n\n\n\n\nAll Linux (Ubuntu 11.10) based. Postgres 9.1 PostGIS 1.5 Geoserver 2.1.3 \n\n\n\n\nRiaan \n\n\n\n\n\n\n\n\n-- \nThis message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard.\nThe full disclaimer details can be found at http://www.csir.co.za/disclaimer.html.\n\nThis message has been scanned for viruses and dangerous content by MailScanner, \nand is believed to be clean.\n\nPlease consider the environment before printing this email.",
"msg_date": "Tue, 24 Jul 2012 09:21:17 +0200",
"msg_from": "\"Riaan van den Dool\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Geoserver-PostGIS performance problems"
},
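One way to separate database time from JDBC and rendering time in a case like this is to have the server itself log the duration, and optionally the plan, of the statement the Geoserver connection actually sends. A sketch with illustrative thresholds; the settings need superuser rights or a postgresql.conf change on the slave cluster the application connects to (shared_preload_libraries requires a restart, the rest only a reload):

-- postgresql.conf on the slave used by Geoserver:
--   log_min_duration_statement = 500          # ms
--   shared_preload_libraries = 'auto_explain'
--   auto_explain.log_min_duration = 500       # ms
--   auto_explain.log_analyze = on

-- or, for ad-hoc testing from a superuser psql session only:
SET log_min_duration_statement = '500ms';
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '500ms';
SET auto_explain.log_analyze = on;

If the logged duration stays near the 380 ms seen from PgAdmin while the client still waits roughly 48 seconds, the time is being spent outside the database.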
{
"msg_contents": "Hi Riaan,\nI am familiar if the Geoserver/Geotools package and I believe that the problem is not Postgres/PostGIS but rather Geoserver.\nThe DB Connection Closed message is not sent at the end of the query, but rather at the end of the rendering. There is more than just querying happening between the Select message and the Closed message.\nBrett\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Riaan van den Dool\nSent: Tuesday, 24 July 2012 5:21 PM\nTo: [email protected]\nSubject: [PERFORM] Geoserver-PostGIS performance problems\n\n\n\nHi\n\n\n\nWe have noticed something strange in the interaction between our Geoserver instance and Postgres/PostGIS.\n\n\n\nAfter setting Geoserver's log level to include developer debugging, I managed to capture a single request from Geoserver WMS to PostGIS.\n\n\n\nThe (shortened) sequence of events and their timestamps:\n\n\n\n12:31:22,658 - SELECT query for MSG is sent to Postgres\n\n12:32:10,315 - Rendering for MSG layer starts\n\n12:32:10,356 - DB Connection Closed\n\n========\n\n~ 48 seconds\n\n\n\nInterestingly enough, when I execute the same query (MSG) directly from PgAdmin3:\n\n\n\nSELECT \"frp_mw\",encode(ST_AsBinary(ST_Force_2D(\"the_geom\")),'base64') as \"the_geom\" FROM \"public\".\"af_msg_abba_datetime_today\" WHERE (\"the_geom\" && GeometryFromText('POLYGON ((-27.67968749408379 -46.92207325648429, -27.67968749408379 -6.186892358058866, 75.67968748740275 -6.186892358058866, 75.67968748740275 -46.92207325648429, -27.67968749408379 -46.92207325648429))', 4326) AND ((\"frp_mw\" >= -1 AND \"frp_mw\" <= 150) OR (\"frp_mw\" >= 151 AND \"frp_mw\" <= 300) OR (\"frp_mw\" >= 301 AND \"frp_mw\" <= 600) OR (\"frp_mw\" >= 601 AND \"frp_mw\" <= 50000)));\n\n\n\nI get 6515 rows in 380 ms.\n\n\n\nIe Postgres is able to return the results of the query within 380ms if queried from PgAdmin3 but Geoserver takes about 48 seconds to get hold of the same resultset.\n\n\n\nIs this some kind of JDBC problem perhaps?\n\n\n\nSome details about our setup:\n\n\n\nMaster Postgres database is on a separate VM from Geoserver, but we replicate to a slave Postgres cluster on the Geoserver VM (same host). So Geoserver is referencing the 'localhost' read-only Postgres cluster for its queries.\n\n\n\nThe 380 ms response time shown above was from the slave Postgres cluster, same one that Geoserver is using.\n\n\n\nAll Linux (Ubuntu 11.10) based. Postgres 9.1 PostGIS 1.5 Geoserver 2.1.3\n\n\n\nRiaan\n\n\n\n\n\n--\nThis message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard.\nThe full disclaimer details can be found at http://www.csir.co.za/disclaimer.html.\n\nThis message has been scanned for viruses and dangerous content by MailScanner<http://www.mailscanner.info/>,\nand is believed to be clean.\n\nPlease consider the environment before printing this email.\n\nHi Riaan,I am familiar if the Geoserver/Geotools package and I believe that the problem is not Postgres/PostGIS but rather Geoserver.The DB Connection Closed message is not sent at the end of the query, but rather at the end of the rendering. 
There is more than just querying happening between the Select message and the Closed message.BrettFrom: [email protected] [mailto:[email protected]] On Behalf Of Riaan van den DoolSent: Tuesday, 24 July 2012 5:21 PMTo: [email protected]: [PERFORM] Geoserver-PostGIS performance problems Hi We have noticed something strange in the interaction between our Geoserver instance and Postgres/PostGIS. After setting Geoserver's log level to include developer debugging, I managed to capture a single request from Geoserver WMS to PostGIS. The (shortened) sequence of events and their timestamps: 12:31:22,658 - SELECT query for MSG is sent to Postgres 12:32:10,315 - Rendering for MSG layer starts 12:32:10,356 - DB Connection Closed ======== ~ 48 seconds Interestingly enough, when I execute the same query (MSG) directly from PgAdmin3: SELECT \"frp_mw\",encode(ST_AsBinary(ST_Force_2D(\"the_geom\")),'base64') as \"the_geom\" FROM \"public\".\"af_msg_abba_datetime_today\" WHERE (\"the_geom\" && GeometryFromText('POLYGON ((-27.67968749408379 -46.92207325648429, -27.67968749408379 -6.186892358058866, 75.67968748740275 -6.186892358058866, 75.67968748740275 -46.92207325648429, -27.67968749408379 -46.92207325648429))', 4326) AND ((\"frp_mw\" >= -1 AND \"frp_mw\" <= 150) OR (\"frp_mw\" >= 151 AND \"frp_mw\" <= 300) OR (\"frp_mw\" >= 301 AND \"frp_mw\" <= 600) OR (\"frp_mw\" >= 601 AND \"frp_mw\" <= 50000))); I get 6515 rows in 380 ms. Ie Postgres is able to return the results of the query within 380ms if queried from PgAdmin3 but Geoserver takes about 48 seconds to get hold of the same resultset. Is this some kind of JDBC problem perhaps? Some details about our setup: Master Postgres database is on a separate VM from Geoserver, but we replicate to a slave Postgres cluster on the Geoserver VM (same host). So Geoserver is referencing the 'localhost' read-only Postgres cluster for its queries. The 380 ms response time shown above was from the slave Postgres cluster, same one that Geoserver is using. All Linux (Ubuntu 11.10) based. Postgres 9.1 PostGIS 1.5 Geoserver 2.1.3 Riaan -- This message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard. The full disclaimer details can be found at http://www.csir.co.za/disclaimer.html. This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. Please consider the environment before printing this email.",
"msg_date": "Tue, 24 Jul 2012 17:54:49 +1000",
"msg_from": "Brett Walker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Geoserver-PostGIS performance problems"
},
{
"msg_contents": "Thank you for this response. \n\nIt turns out our performance problems were solved when I switched off 'Prepared statements' in Geoserver for the PostGIS data store. It makes quite a huge difference. \n\nRiaan\n\n>>> Brett Walker <[email protected]> 7/24/2012 09:54 AM >>>\n\nHi Riaan, \nI am familiar if the Geoserver/Geotools package and I believe that the problem is not Postgres/PostGIS but rather Geoserver. \nThe DB Connection Closed message is not sent at the end of the query, but rather at the end of the rendering. There is more than just querying happening between the Select message and the Closed message. \nBrett \n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Riaan van den Dool\nSent: Tuesday, 24 July 2012 5:21 PM\nTo: [email protected]\nSubject: [PERFORM] Geoserver-PostGIS performance problems \n\n \n \nHi \n \nWe have noticed something strange in the interaction between our Geoserver instance and Postgres/PostGIS. \n \nAfter setting Geoserver's log level to include developer debugging, I managed to capture a single request from Geoserver WMS to PostGIS. \n \nThe (shortened) sequence of events and their timestamps: \n \n12:31:22,658 - SELECT query for MSG is sent to Postgres \n12:32:10,315 - Rendering for MSG layer starts \n12:32:10,356 - DB Connection Closed \n======== \n~ 48 seconds \n \nInterestingly enough, when I execute the same query (MSG) directly from PgAdmin3: \n \nSELECT \"frp_mw\",encode(ST_AsBinary(ST_Force_2D(\"the_geom\")),'base64') as \"the_geom\" FROM \"public\".\"af_msg_abba_datetime_today\" WHERE (\"the_geom\" && GeometryFromText('POLYGON ((-27.67968749408379 -46.92207325648429, -27.67968749408379 -6.186892358058866, 75.67968748740275 -6.186892358058866, 75.67968748740275 -46.92207325648429, -27.67968749408379 -46.92207325648429))', 4326) AND ((\"frp_mw\" >= -1 AND \"frp_mw\" <= 150) OR (\"frp_mw\" >= 151 AND \"frp_mw\" <= 300) OR (\"frp_mw\" >= 301 AND \"frp_mw\" <= 600) OR (\"frp_mw\" >= 601 AND \"frp_mw\" <= 50000))); \n \nI get 6515 rows in 380 ms. \n \nIe Postgres is able to return the results of the query within 380ms if queried from PgAdmin3 but Geoserver takes about 48 seconds to get hold of the same resultset. \n \nIs this some kind of JDBC problem perhaps? \n \nSome details about our setup: \n \nMaster Postgres database is on a separate VM from Geoserver, but we replicate to a slave Postgres cluster on the Geoserver VM (same host). So Geoserver is referencing the 'localhost' read-only Postgres cluster for its queries. \n \nThe 380 ms response time shown above was from the slave Postgres cluster, same one that Geoserver is using. \n \nAll Linux (Ubuntu 11.10) based. Postgres 9.1 PostGIS 1.5 Geoserver 2.1.3 \n \nRiaan \n\n\n\n\n\n\n--\nThis message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard.\nThe full disclaimer details can be found at http://www.csir.co.za/disclaimer.html. \n\nThis message has been scanned for viruses and dangerous content by MailScanner ( http://www.mailscanner.info/ ),\nand is believed to be clean. \n\nPlease consider the environment before printing this email. \n\n\n--\nThis message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard.\nThe full disclaimer details can be found at http://www.csir.co.za/disclaimer.html. 
\n\nThis message has been scanned for viruses and dangerous content by MailScanner ( http://www.mailscanner.info/ ),\nand is believed to be clean. \n\nPlease consider the environment before printing this email.\n\n-- \nThis message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard. \nThe full disclaimer details can be found at http://www.csir.co.za/disclaimer.html.\n\nThis message has been scanned for viruses and dangerous content by MailScanner, \nand is believed to be clean.\n\nPlease consider the environment before printing this email.\n\n\n\n\n\n\n\n\nThank you for this response. \n\n\nIt turns out our performance problems were solved when I switched off 'Prepared statements' in Geoserver for the PostGIS data store. It makes quite a huge difference. \n\n\nRiaan>>> Brett Walker <[email protected]> 7/24/2012 09:54 AM >>> \n\n\nHi Riaan, \n\nI am familiar if the Geoserver/Geotools package and I believe that the problem is not Postgres/PostGIS but rather Geoserver. \n\nThe DB Connection Closed message is not sent at the end of the query, but rather at the end of the rendering. There is more than just querying happening between the Select message and the Closed message. \n\nBrett \n\n\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Riaan van den DoolSent: Tuesday, 24 July 2012 5:21 PMTo: [email protected]: [PERFORM] Geoserver-PostGIS performance problems \n\n\n\n\n \n\n\n \n\nHi \n\n\n \n\nWe have noticed something strange in the interaction between our Geoserver instance and Postgres/PostGIS. \n\n\n \n\nAfter setting Geoserver's log level to include developer debugging, I managed to capture a single request from Geoserver WMS to PostGIS. \n\n\n \n\nThe (shortened) sequence of events and their timestamps: \n\n\n \n\n12:31:22,658 - SELECT query for MSG is sent to Postgres \n\n12:32:10,315 - Rendering for MSG layer starts \n\n12:32:10,356 - DB Connection Closed \n\n======== \n\n~ 48 seconds \n\n\n \n\nInterestingly enough, when I execute the same query (MSG) directly from PgAdmin3: \n\n\n \n\nSELECT \"frp_mw\",encode(ST_AsBinary(ST_Force_2D(\"the_geom\")),'base64') as \"the_geom\" FROM \"public\".\"af_msg_abba_datetime_today\" WHERE (\"the_geom\" && GeometryFromText('POLYGON ((-27.67968749408379 -46.92207325648429, -27.67968749408379 -6.186892358058866, 75.67968748740275 -6.186892358058866, 75.67968748740275 -46.92207325648429, -27.67968749408379 -46.92207325648429))', 4326) AND ((\"frp_mw\" >= -1 AND \"frp_mw\" <= 150) OR (\"frp_mw\" >= 151 AND \"frp_mw\" <= 300) OR (\"frp_mw\" >= 301 AND \"frp_mw\" <= 600) OR (\"frp_mw\" >= 601 AND \"frp_mw\" <= 50000))); \n\n\n \n\nI get 6515 rows in 380 ms. \n\n\n \n\nIe Postgres is able to return the results of the query within 380ms if queried from PgAdmin3 but Geoserver takes about 48 seconds to get hold of the same resultset. \n\n\n \n\nIs this some kind of JDBC problem perhaps? \n\n\n \n\nSome details about our setup: \n\n\n \n\nMaster Postgres database is on a separate VM from Geoserver, but we replicate to a slave Postgres cluster on the Geoserver VM (same host). So Geoserver is referencing the 'localhost' read-only Postgres cluster for its queries. \n\n\n \n\nThe 380 ms response time shown above was from the slave Postgres cluster, same one that Geoserver is using. \n\n\n \n\nAll Linux (Ubuntu 11.10) based. 
Postgres 9.1 PostGIS 1.5 Geoserver 2.1.3 \n\n\n \n\nRiaan \n\n\n\n\n\n\n\n\n\n\n --This message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard.The full disclaimer details can be found at http://www.csir.co.za/disclaimer.html. \n\n\n This message has been scanned for viruses and dangerous content by MailScanner,and is believed to be clean. \n\n\n Please consider the environment before printing this email. \n\n\n\n --This message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard.The full disclaimer details can be found at http://www.csir.co.za/disclaimer.html. \n\n\n This message has been scanned for viruses and dangerous content by MailScanner,and is believed to be clean. \n\n\n Please consider the environment before printing this email.\n\n\n-- \nThis message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard.\nThe full disclaimer details can be found at http://www.csir.co.za/disclaimer.html.\n\nThis message has been scanned for viruses and dangerous content by MailScanner, \nand is believed to be clean.\n\nPlease consider the environment before printing this email.",
"msg_date": "Tue, 24 Jul 2012 10:10:00 +0200",
"msg_from": "\"Riaan van den Dool\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Geoserver-PostGIS performance problems"
},
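That fix is consistent with the server having built a single generic plan for the parameterized statement: on 9.1 a prepared statement is planned without looking at the actual parameter values, so it cannot benefit from them the way the literal query pasted into PgAdmin does. A rough way to see the difference from psql, reusing the poster's table but a deliberately simplified predicate (sketch only):

-- literal query: the planner sees the constants
EXPLAIN ANALYZE
SELECT frp_mw, the_geom
FROM af_msg_abba_datetime_today
WHERE frp_mw BETWEEN -1 AND 150;

-- prepared statement: on 9.1 the plan is generic, built without the values
PREPARE frp_q(int, int) AS
  SELECT frp_mw, the_geom
  FROM af_msg_abba_datetime_today
  WHERE frp_mw BETWEEN $1 AND $2;

EXPLAIN ANALYZE EXECUTE frp_q(-1, 150);
DEALLOCATE frp_q;

From 9.2 onwards the server can re-plan a prepared statement with the actual values when that looks worthwhile, which is what the 9.2 remarks later in the thread refer to.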
{
"msg_contents": "This may be another issue of the problem discussed here:\nhttp://postgresql.1045698.n5.nabble.com/avoid-prepared-statements-on-complex-queries-td4996363.html\n(Kris Jurka explains the crux of it in that thread).\n\nNote that it seems the preparing/planning interaction was not the\nposter's actual problem, but it may have been yours. As Tom Lane notes\nin that thread, this should get better in 9.2.\n",
"msg_date": "Tue, 24 Jul 2012 08:50:42 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Geoserver-PostGIS performance problems"
},
{
"msg_contents": "On Tue, Jul 24, 2012 at 10:50 AM, Maciek Sakrejda <[email protected]> wrote:\n> This may be another issue of the problem discussed here:\n> http://postgresql.1045698.n5.nabble.com/avoid-prepared-statements-on-complex-queries-td4996363.html\n> (Kris Jurka explains the crux of it in that thread).\n>\n> Note that it seems the preparing/planning interaction was not the\n> poster's actual problem, but it may have been yours. As Tom Lane notes\n> in that thread, this should get better in 9.2.\n\njdbc should get some blame too -- it's really aggressive about\npreparing queries.\n\nmerlin\n",
"msg_date": "Wed, 25 Jul 2012 13:45:44 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Geoserver-PostGIS performance problems"
},
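A hedged way to confirm from the server side that the driver really has switched to server-side prepared statements is to log all statement durations and look at how the queries arrive; pgjdbc's server-prepared statements typically show up under driver-generated names such as S_1 (the exact naming is a driver internal, so treat it as an assumption):

-- superuser session or postgresql.conf: log every statement with its duration
SET log_min_duration_statement = 0;

-- the server log then contains lines roughly of this shape:
--   LOG:  duration: 0.31 ms  parse S_1: SELECT ...
--   LOG:  duration: 0.12 ms  bind S_1: SELECT ...
--   LOG:  duration: 47855.90 ms  execute S_1: SELECT ...
-- a one-shot parameterized query is logged as "execute <unnamed>: ..." instead.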
{
"msg_contents": "On Wed, Jul 25, 2012 at 3:45 PM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Jul 24, 2012 at 10:50 AM, Maciek Sakrejda <[email protected]> wrote:\n>> This may be another issue of the problem discussed here:\n>> http://postgresql.1045698.n5.nabble.com/avoid-prepared-statements-on-complex-queries-td4996363.html\n>> (Kris Jurka explains the crux of it in that thread).\n>>\n>> Note that it seems the preparing/planning interaction was not the\n>> poster's actual problem, but it may have been yours. As Tom Lane notes\n>> in that thread, this should get better in 9.2.\n>\n> jdbc should get some blame too -- it's really aggressive about\n> preparing queries.\n>\n\nindeed!\nIs there any reason for that?\n",
"msg_date": "Wed, 25 Jul 2012 16:17:38 -0300",
"msg_from": "Vinicius Abrahao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Geoserver-PostGIS performance problems"
},
{
"msg_contents": "On Wed, Jul 25, 2012 at 2:17 PM, Vinicius Abrahao <[email protected]> wrote:\n> On Wed, Jul 25, 2012 at 3:45 PM, Merlin Moncure <[email protected]> wrote:\n>>> Note that it seems the preparing/planning interaction was not the\n>>> poster's actual problem, but it may have been yours. As Tom Lane notes\n>>> in that thread, this should get better in 9.2.\n>>\n>> jdbc should get some blame too -- it's really aggressive about\n>> preparing queries.\n>>\n>\n> indeed!\n> Is there any reason for that?\n\nIMNSHO it's an oversight in the core JDBC design dating back to the\nbeginning: you have two basic choices for executing SQL. The\nunparameterized Statement or the parameterized PreparedStatement.\nThere should have been a 'ParamaterizedStatement' that gave the\nexpectation of paramaterization without setting up and permanent\nserver side structures to handle the query; libpq makes this\ndistinction and it works very well. Of course, there are various ways\nto work around this but the point stands.\n\nmerlin\n",
"msg_date": "Wed, 25 Jul 2012 14:26:15 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Geoserver-PostGIS performance problems"
},
{
"msg_contents": "On Wed, Jul 25, 2012 at 4:26 PM, Merlin Moncure <[email protected]> wrote:\n> On Wed, Jul 25, 2012 at 2:17 PM, Vinicius Abrahao <[email protected]> wrote:\n>> On Wed, Jul 25, 2012 at 3:45 PM, Merlin Moncure <[email protected]> wrote:\n>>>> Note that it seems the preparing/planning interaction was not the\n>>>> poster's actual problem, but it may have been yours. As Tom Lane notes\n>>>> in that thread, this should get better in 9.2.\n>>>\n>>> jdbc should get some blame too -- it's really aggressive about\n>>> preparing queries.\n>>>\n>>\n>> indeed!\n>> Is there any reason for that?\n>\n> IMNSHO it's an oversight in the core JDBC design dating back to the\n> beginning: you have two basic choices for executing SQL. The\n> unparameterized Statement or the parameterized PreparedStatement.\n> There should have been a 'ParamaterizedStatement' that gave the\n> expectation of paramaterization without setting up and permanent\n> server side structures to handle the query; libpq makes this\n> distinction and it works very well. Of course, there are various ways\n> to work around this but the point stands.\n>\n\nThat is true, I was observing the same, days ago:\n\nRunning queries and statments in jdbc:\nhttps://github.com/vinnix/JavaLab/blob/master/Scrollable.java\n\nAnd running queries with libpq:\nhttps://github.com/vinnix/testLibPQ/blob/master/testlibpq.c\n\nIs this possible to change something (I really don't know what or\nwhere) in the jdbc driver\nto get more direct aproach? (if that's make any sense to you guys...)\n\nBest regards,\n\nvinnix\n",
"msg_date": "Wed, 25 Jul 2012 16:59:29 -0300",
"msg_from": "Vinicius Abrahao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Geoserver-PostGIS performance problems"
},
{
"msg_contents": "On Wed, Jul 25, 2012 at 2:59 PM, Vinicius Abrahao <[email protected]> wrote:\n> On Wed, Jul 25, 2012 at 4:26 PM, Merlin Moncure <[email protected]> wrote:\n>> On Wed, Jul 25, 2012 at 2:17 PM, Vinicius Abrahao <[email protected]> wrote:\n>>> On Wed, Jul 25, 2012 at 3:45 PM, Merlin Moncure <[email protected]> wrote:\n>>>>> Note that it seems the preparing/planning interaction was not the\n>>>>> poster's actual problem, but it may have been yours. As Tom Lane notes\n>>>>> in that thread, this should get better in 9.2.\n>>>>\n>>>> jdbc should get some blame too -- it's really aggressive about\n>>>> preparing queries.\n>>>>\n>>>\n>>> indeed!\n>>> Is there any reason for that?\n>>\n>> IMNSHO it's an oversight in the core JDBC design dating back to the\n>> beginning: you have two basic choices for executing SQL. The\n>> unparameterized Statement or the parameterized PreparedStatement.\n>> There should have been a 'ParamaterizedStatement' that gave the\n>> expectation of paramaterization without setting up and permanent\n>> server side structures to handle the query; libpq makes this\n>> distinction and it works very well. Of course, there are various ways\n>> to work around this but the point stands.\n>>\n>\n> That is true, I was observing the same, days ago:\n>\n> Running queries and statments in jdbc:\n> https://github.com/vinnix/JavaLab/blob/master/Scrollable.java\n>\n> And running queries with libpq:\n> https://github.com/vinnix/testLibPQ/blob/master/testlibpq.c\n>\n> Is this possible to change something (I really don't know what or\n> where) in the jdbc driver\n> to get more direct aproach? (if that's make any sense to you guys...)\n\nyou can disable server-side preparing in the url or as library\nsetting. see here:\n\"jdbc:postgresql://localhost:5432/test?prepareThreshold=3\";\n\nunfortunately postgres jdbc is bugged and does not honor the above for\ntransaction control commands (begin, commit, etc). This patch\nhttp://treehou.se/~omar/postgresql-jdbc-8.4-701-pgbouncer_txn.patch\nwill fix it, assuming it hasn't been fixed in recent postgres jdbc.\n\nmerlin\n",
"msg_date": "Wed, 25 Jul 2012 15:04:00 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Geoserver-PostGIS performance problems"
},
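To check what a given connection has actually prepared on the server, the pg_prepared_statements view can be queried through that same connection (it only shows the current session's statements):

SELECT name, statement, from_sql
FROM pg_prepared_statements;

-- protocol-level statements prepared by the driver have from_sql = false;
-- statements created with SQL PREPARE have from_sql = true.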
{
"msg_contents": "Why not just use simple Statement instead of PreparedStatement and \nconstruct the SQL with concated string or StringBuilder? like this:\nint col1=xxx;\nString col2=\"xxxx\";\nString sql=\"select * from table where col1=\"+col+\" and col2='\"+col2+\"'\";\n\n于 2012/7/26 3:59, Vinicius Abrahao 写道:\n> On Wed, Jul 25, 2012 at 4:26 PM, Merlin Moncure <[email protected]> wrote:\n>> On Wed, Jul 25, 2012 at 2:17 PM, Vinicius Abrahao <[email protected]> wrote:\n>>> On Wed, Jul 25, 2012 at 3:45 PM, Merlin Moncure <[email protected]> wrote:\n>>>>> Note that it seems the preparing/planning interaction was not the\n>>>>> poster's actual problem, but it may have been yours. As Tom Lane notes\n>>>>> in that thread, this should get better in 9.2.\n>>>> jdbc should get some blame too -- it's really aggressive about\n>>>> preparing queries.\n>>>>\n>>> indeed!\n>>> Is there any reason for that?\n>> IMNSHO it's an oversight in the core JDBC design dating back to the\n>> beginning: you have two basic choices for executing SQL. The\n>> unparameterized Statement or the parameterized PreparedStatement.\n>> There should have been a 'ParamaterizedStatement' that gave the\n>> expectation of paramaterization without setting up and permanent\n>> server side structures to handle the query; libpq makes this\n>> distinction and it works very well. Of course, there are various ways\n>> to work around this but the point stands.\n>>\n> That is true, I was observing the same, days ago:\n>\n> Running queries and statments in jdbc:\n> https://github.com/vinnix/JavaLab/blob/master/Scrollable.java\n>\n> And running queries with libpq:\n> https://github.com/vinnix/testLibPQ/blob/master/testlibpq.c\n>\n> Is this possible to change something (I really don't know what or\n> where) in the jdbc driver\n> to get more direct aproach? (if that's make any sense to you guys...)\n>\n> Best regards,\n>\n> vinnix\n>\n\n",
"msg_date": "Thu, 26 Jul 2012 10:13:48 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Geoserver-PostGIS performance problems"
},
{
"msg_contents": "> unfortunately postgres jdbc is bugged and does not honor the above for\n> transaction control commands (begin, commit, etc). This patch\n> http://treehou.se/~omar/postgresql-jdbc-8.4-701-pgbouncer_txn.patch\n> will fix it, assuming it hasn't been fixed in recent postgres jdbc.\n\nLooks like it's still an issue:\nhttps://github.com/pgjdbc/pgjdbc/blob/master/org/postgresql/core/v3/QueryExecutorImpl.java#L426\n\nAlthough I don't quite follow why it's an issue in the first\nplace--isn't the point to avoid creating a plan with parameter markers\nbut not actual parameter information? BEGIN, COMMIT, et al never have\nmarkers in the first place. What am I missing?\n",
"msg_date": "Wed, 25 Jul 2012 23:34:17 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Geoserver-PostGIS performance problems"
},
{
"msg_contents": "On Wed, Jul 25, 2012 at 7:13 PM, Rural Hunter <[email protected]> wrote:\n> Why not just use simple Statement instead of PreparedStatement and construct\n> the SQL with concated string or StringBuilder? like this:\n> int col1=xxx;\n> String col2=\"xxxx\";\n> String sql=\"select * from table where col1=\"+col+\" and col2='\"+col2+\"'\";\n\nAh, finally get to apply the old there's-an-xkcd-for-that rule here:\nhttp://xkcd.com/327/\n\nOr, more informatively: http://en.wikipedia.org/wiki/SQL_injection\n\nNote that it's not completely crazy (in fact, the JDBC driver used to\nthis this forever ago): if you know what you're doing, you *can*\nsafely escape strings and avoid injection. But it's not for the faint\nof heart.\n\nAlso, if you control the parameters and can verify that escaping is\nnot (and will never be) necessary over the domain of their possible\nvalues, that's another option.\n\nBut in general, it's safer to let drivers worry about this.\n",
"msg_date": "Wed, 25 Jul 2012 23:51:33 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Geoserver-PostGIS performance problems"
},
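When SQL really does have to be assembled as text on the server side, for instance dynamic SQL inside a PL/pgSQL function, the built-in quoting helpers take care of the escaping discussed above; a small sketch with made-up identifiers (format() with %I / %L is available from 9.1):

DO $$
DECLARE
    v_last text := 'O''Brien';   -- value containing a quote
    v_sql  text;
BEGIN
    v_sql := format('SELECT count(*) FROM %I WHERE last_name = %L',
                    'customers', v_last);
    RAISE NOTICE '%', v_sql;     -- %L doubles the embedded quote safely
    -- EXECUTE v_sql;            -- would run the safely quoted statement
END
$$;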
{
"msg_contents": "On Thu, Jul 26, 2012 at 1:34 AM, Maciek Sakrejda <[email protected]> wrote:\n>> unfortunately postgres jdbc is bugged and does not honor the above for\n>> transaction control commands (begin, commit, etc). This patch\n>> http://treehou.se/~omar/postgresql-jdbc-8.4-701-pgbouncer_txn.patch\n>> will fix it, assuming it hasn't been fixed in recent postgres jdbc.\n>\n> Looks like it's still an issue:\n> https://github.com/pgjdbc/pgjdbc/blob/master/org/postgresql/core/v3/QueryExecutorImpl.java#L426\n>\n> Although I don't quite follow why it's an issue in the first\n> place--isn't the point to avoid creating a plan with parameter markers\n> but not actual parameter information? BEGIN, COMMIT, et al never have\n> markers in the first place. What am I missing?\n\nThis causes problems for connection poolers. (see;\nhttp://pgbouncer.projects.postgresql.org/doc/faq.html#_disabling_prepared_statements_in_jdbc).\n\nmerlin\n",
"msg_date": "Thu, 26 Jul 2012 08:32:21 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Geoserver-PostGIS performance problems"
}
] |
[
{
"msg_contents": "Hi,\n\nI was testing a query to delete duplicates to see how well using ctid works if the table doesn't have a unique identifier available.\n\nThe table definition is:\n\ncreate table dupes\n(\n id integer primary key,\n first_name text,\n last_name text\n);\n\nMy test table has 100.000 rows with ~13000 being actually unique.\n\nThe following statement:\n\nDELETE FROM dupes\nWHERE id NOT IN (SELECT min(b.id)\n FROM dupes b\n GROUP BY first_name, last_Name\n HAVING count(*) > 1);\n\nproduces a quite nice execution plan:\n\nDelete on public.dupes (cost=2770.00..4640.00 rows=50000 width=6) (actual time=299.809..299.809 rows=0 loops=1)\n Buffers: shared hit=88100\n -> Seq Scan on public.dupes (cost=2770.00..4640.00 rows=50000 width=6) (actual time=150.113..211.340 rows=86860 loops=1)\n Output: dupes.ctid\n Filter: (NOT (hashed SubPlan 1))\n Buffers: shared hit=1240\n SubPlan 1\n -> HashAggregate (cost=2620.00..2745.00 rows=10000 width=18) (actual time=115.739..143.004 rows=13140 loops=1)\n Output: min(b.id), b.first_name, b.last_name\n Filter: (count(*) > 1)\n Buffers: shared hit=620\n -> Seq Scan on public.dupes b (cost=0.00..1620.00 rows=100000 width=18) (actual time=0.006..15.563 rows=100000 loops=1)\n Output: b.id, b.first_name, b.last_name\n Buffers: shared hit=620\nTotal runtime: 301.241 ms\n\nNow assuming I do not have a unique value in the table. In that case I would revert to using the ctid to identify individual rows:\n\nDELETE FROM dupes\nWHERE ctid NOT IN (SELECT min(b.ctid)\n FROM dupes b\n GROUP BY first_name, last_Name\n HAVING count(*) > 1);\n\nWhich has a completely different execution plan:\n\nDelete on public.dupes (cost=2620.00..10004490.00 rows=50000 width=6) (actual time=269966.623..269966.623 rows=0 loops=1)\n Buffers: shared hit=88720\n -> Seq Scan on public.dupes (cost=2620.00..10004490.00 rows=50000 width=6) (actual time=176.107..269582.651 rows=86860 loops=1)\n Output: dupes.ctid\n Filter: (NOT (SubPlan 1))\n Buffers: shared hit=1240\n SubPlan 1\n -> Materialize (cost=2620.00..2795.00 rows=10000 width=20) (actual time=0.002..0.799 rows=12277 loops=100000)\n Output: (min(b.ctid)), b.first_name, b.last_name\n Buffers: shared hit=620\n -> HashAggregate (cost=2620.00..2745.00 rows=10000 width=20) (actual time=131.162..164.941 rows=13140 loops=1)\n Output: min(b.ctid), b.first_name, b.last_name\n Filter: (count(*) > 1)\n Buffers: shared hit=620\n -> Seq Scan on public.dupes b (cost=0.00..1620.00 rows=100000 width=20) (actual time=0.005..29.531 rows=100000 loops=1)\n Output: b.ctid, b.first_name, b.last_name\n Buffers: shared hit=620\nTotal runtime: 269968.515 ms\n\nThis is Postgres 9.1.4 64bit on Windows 7\n\nWhy does the usage of the CTID column change the plan so drastically?\n\nRegards\nThomas\n\n",
"msg_date": "Tue, 24 Jul 2012 12:13:09 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using ctid column changes plan drastically"
},
{
"msg_contents": "Thomas Kellerer <[email protected]> writes:\n> DELETE FROM dupes\n> WHERE id NOT IN (SELECT min(b.id)\n> FROM dupes b\n> GROUP BY first_name, last_Name\n> HAVING count(*) > 1);\n\nDoesn't that kill the non-duplicates too?\n\n> Why does the usage of the CTID column change the plan so drastically?\n\nIIRC, type tid doesn't have any hash support.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Jul 2012 10:23:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using ctid column changes plan drastically"
},
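Two separate points are packed into that answer. With the HAVING count(*) > 1 filter the subquery only returns ids from groups that do have duplicates, so every row of an already-unique group fails the NOT IN test and is deleted as well; dropping the HAVING keeps exactly one row per name pair. And because tid lacks hash support on this release, the ctid variant cannot be turned into the hashed subplan that made the id variant fast. A sketch of the corrected id-based form:

-- keeps the lowest id of every (first_name, last_name) group,
-- including groups that only ever had one row
DELETE FROM dupes
WHERE id NOT IN (SELECT min(b.id)
                 FROM dupes b
                 GROUP BY b.first_name, b.last_name);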
{
"msg_contents": "Tom Lane, 24.07.2012 16:23:\n> Thomas Kellerer <[email protected]> writes:\n>> DELETE FROM dupes\n>> WHERE id NOT IN (SELECT min(b.id)\n>> FROM dupes b\n>> GROUP BY first_name, last_Name\n>> HAVING count(*) > 1);\n>\n> Doesn't that kill the non-duplicates too?\n\nAh right - another good point on how important the correct test data is ;)\n\n>> Why does the usage of the CTID column change the plan so drastically?\n>\n> IIRC, type tid doesn't have any hash support.\n>\n\nSo the \"bad\" plan is expected?\n\nRegards\nThomas\n\n\n\n",
"msg_date": "Tue, 24 Jul 2012 16:54:18 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Using ctid column changes plan drastically"
},
{
"msg_contents": "Thomas Kellerer <[email protected]> writes:\n> Tom Lane, 24.07.2012 16:23:\n>> IIRC, type tid doesn't have any hash support.\n\n> So the \"bad\" plan is expected?\n\nJoins on tid columns just aren't supported very well at the moment.\nPartly that's from lack of round tuits, and partly it's because it\ndoesn't seem all that wise to encourage people to use them. There\nare gotchas if any of the rows receive concurrent updates.\n\nFWIW, it might be helpful to cast this as a NOT EXISTS rather than\nNOT IN subquery.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Jul 2012 11:55:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using ctid column changes plan drastically"
},
{
"msg_contents": "Tom Lane wrote on 24.07.2012 17:55:\n> Joins on tid columns just aren't supported very well at the moment.\n> Partly that's from lack of round tuits, and partly it's because it\n> doesn't seem all that wise to encourage people to use them. There\n> are gotchas if any of the rows receive concurrent updates.\n\nThanks for the clarification. I will keep that in mind.\n\n> FWIW, it might be helpful to cast this as a NOT EXISTS rather than\n> NOT IN subquery.\n\nHmm. How would you change that into an NOT EXISTS clause (so that one of the duplicates remains)\nEverything I come up with is in fact slower than the NOT IN solution.\n\nRegards\nThomas\n\n\n\n",
"msg_date": "Tue, 24 Jul 2012 18:32:09 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Using ctid column changes plan drastically"
},
{
"msg_contents": "Thomas Kellerer <[email protected]> writes:\n> Tom Lane wrote on 24.07.2012 17:55:\n>> FWIW, it might be helpful to cast this as a NOT EXISTS rather than\n>> NOT IN subquery.\n\n> Hmm. How would you change that into an NOT EXISTS clause (so that one of the duplicates remains)\n> Everything I come up with is in fact slower than the NOT IN solution.\n\nWell, it would only help if you're running a PG version that's new\nenough to recognize the NOT EXISTS as an anti-join; and even then,\nit's possible that joining on a tid column forecloses enough plan\ntypes that you don't get any real benefit. But I'm just guessing.\nCan you show exactly what you tried and what EXPLAIN ANALYZE results\nyou got?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Jul 2012 13:12:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using ctid column changes plan drastically"
},
{
"msg_contents": "Tom Lane, 24.07.2012 19:12:\n> Well, it would only help if you're running a PG version that's new\n> enough to recognize the NOT EXISTS as an anti-join; and even then,\n> it's possible that joining on a tid column forecloses enough plan\n> types that you don't get any real benefit. But I'm just guessing.\n> Can you show exactly what you tried and what EXPLAIN ANALYZE results\n> you got?\n>\n\nI am using 9.1.4 (as I said in my initial post).\n\nI finally found a solution that runs fine:\n\nDELETE FROM dupes a\nWHERE EXISTS (SELECT 1\n FROM dupes b\n WHERE b.first_name = a.first_name\n AND b.last_name = a.last_name\n AND b.ctid > a.ctid);\n\nThe execution plan for this is:\n\nDelete on public.dupes a (cost=14575.95..16978.87 rows=25000 width=12) (actual time=2419.334..2419.334 rows=0 loops=1)\n Buffers: shared hit=18029\n -> Merge Semi Join (cost=14575.95..16978.87 rows=25000 width=12) (actual time=2043.674..2392.707 rows=17097 loops=1)\n Output: a.ctid, b.ctid\n Merge Cond: ((a.first_name = b.first_name) AND (a.last_name = b.last_name))\n Join Filter: (b.ctid > a.ctid)\n Buffers: shared hit=930\n -> Sort (cost=7287.98..7475.48 rows=75000 width=20) (actual time=1024.195..1030.051 rows=75000 loops=1)\n Output: a.ctid, a.first_name, a.last_name\n Sort Key: a.first_name, a.last_name\n Sort Method: quicksort Memory: 8870kB\n Buffers: shared hit=465\n -> Seq Scan on public.dupes a (cost=0.00..1215.00 rows=75000 width=20) (actual time=0.025..23.234 rows=75000 loops=1)\n Output: a.ctid, a.first_name, a.last_name\n Buffers: shared hit=465\n -> Sort (cost=7287.98..7475.48 rows=75000 width=20) (actual time=1019.148..1028.483 rows=105841 loops=1)\n Output: b.ctid, b.first_name, b.last_name\n Sort Key: b.first_name, b.last_name\n Sort Method: quicksort Memory: 8870kB\n Buffers: shared hit=465\n -> Seq Scan on public.dupes b (cost=0.00..1215.00 rows=75000 width=20) (actual time=0.017..19.133 rows=75000 loops=1)\n Output: b.ctid, b.first_name, b.last_name\n Buffers: shared hit=465\nTotal runtime: 2420.953 ms\n\nWhich is a lot better than the plan using \"WHERE ctid NOT IN (.....)\":\n\nDelete on public.dupes (cost=1777.50..4925055.00 rows=37500 width=6) (actual time=582515.094..582515.094 rows=0 loops=1)\n Buffers: shared hit=18027\n -> Seq Scan on public.dupes (cost=1777.50..4925055.00 rows=37500 width=6) (actual time=1038.164..582332.927 rows=17097 loops=1)\n Output: dupes.ctid\n Filter: (NOT (SubPlan 1))\n Buffers: shared hit=930\n SubPlan 1\n -> Materialize (cost=1777.50..1890.00 rows=7500 width=20) (actual time=0.001..2.283 rows=35552 loops=75000)\n Output: (min(b.ctid)), b.first_name, b.last_name\n Buffers: shared hit=465\n -> HashAggregate (cost=1777.50..1852.50 rows=7500 width=20) (actual time=90.964..120.228 rows=57903 loops=1)\n Output: min(b.ctid), b.first_name, b.last_name\n Buffers: shared hit=465\n -> Seq Scan on public.dupes b (cost=0.00..1215.00 rows=75000 width=20) (actual time=0.008..25.515 rows=75000 loops=1)\n Output: b.ctid, b.first_name, b.last_name\n Buffers: shared hit=465\nTotal runtime: 582517.711 ms\n\nUsing \"WHERE id NOT IN (...)\" is the fastest way:\n\nDelete on public.dupes (cost=1871.25..3273.75 rows=37500 width=6) (actual time=187.949..187.949 rows=0 loops=1)\n Buffers: shared hit=18490\n -> Seq Scan on public.dupes (cost=1871.25..3273.75 rows=37500 width=6) (actual time=125.351..171.108 rows=17097 loops=1)\n Output: dupes.ctid\n Filter: (NOT (hashed SubPlan 1))\n Buffers: shared hit=930\n SubPlan 1\n -> HashAggregate (cost=1777.50..1852.50 rows=7500 
width=18) (actual time=73.131..93.421 rows=57903 loops=1)\n Output: min(b.id), b.first_name, b.last_name\n Buffers: shared hit=465\n -> Seq Scan on public.dupes b (cost=0.00..1215.00 rows=75000 width=18) (actual time=0.004..8.515 rows=75000 loops=1)\n Output: b.id, b.first_name, b.last_name\n Buffers: shared hit=465\nTotal runtime: 189.222 ms\n\nRegards\nThomas\n\n\n",
"msg_date": "Wed, 25 Jul 2012 10:10:13 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Using ctid column changes plan drastically"
},
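An alternative that still identifies rows by ctid but gives the planner ordinary, hashable join columns is to rank the rows with a window function and join back on the name columns, keeping ctid only as the tie-breaker. A sketch against the same dupes table; it assumes the name columns are not null and has not been tested against the original data:

DELETE FROM dupes d
USING (SELECT ctid, first_name, last_name,
              row_number() OVER (PARTITION BY first_name, last_name
                                 ORDER BY ctid) AS rn
       FROM dupes) x
WHERE d.first_name = x.first_name
  AND d.last_name  = x.last_name
  AND d.ctid       = x.ctid
  AND x.rn > 1;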
{
"msg_contents": "Thomas Kellerer <[email protected]> wrote:\n \n> I finally found a solution that runs fine:\n> \n> DELETE FROM dupes a\n> WHERE EXISTS (SELECT 1\n> FROM dupes b\n> WHERE b.first_name = a.first_name\n> AND b.last_name = a.last_name\n> AND b.ctid > a.ctid);\n \nHow does performance for that compare to?:\n \nCREATE TABLE nodupes AS\n SELECT DISTINCT ON (last_name, first_name) * FROM dupes\n ORDER BY last_name, first_name, ctid;\n \n-Kevin\n",
"msg_date": "Wed, 01 Aug 2012 11:42:28 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using ctid column changes plan drastically"
}
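If the DISTINCT ON rewrite wins, the remaining step is swapping the new table in; a sketch of the full sequence (CREATE TABLE AS copies no indexes, constraints or defaults, so those have to be recreated by hand):

BEGIN;
CREATE TABLE nodupes AS
  SELECT DISTINCT ON (last_name, first_name) *
  FROM dupes
  ORDER BY last_name, first_name, ctid;
ALTER TABLE dupes   RENAME TO dupes_old;
ALTER TABLE nodupes RENAME TO dupes;
ALTER TABLE dupes ADD PRIMARY KEY (id);   -- recreate what was not copied
COMMIT;
DROP TABLE dupes_old;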
] |
[
{
"msg_contents": "I partitioned a table, but didn't find any improvement in query timing.\n\nThe basic table was like as follows :-\n\n\\d table1\n> Table \"public.table1_old\"\n> Column | Type | Modifiers\n> --------------+-----------------------------+--------------------\n> crmid | integer | not null\n> smcreatorid | integer | not null default 0\n> smownerid | integer | not null default 0\n> modifiedby | integer | not null default 0\n> module | character varying(30) | not null\n> description | text |\n> createdtime | timestamp without time zone | not null\n> modifiedtime | timestamp without time zone | not null\n> viewedtime | timestamp without time zone |\n> status | character varying(50) |\n> version | integer | not null default 0\n> presence | integer | default 1\n> deleted | integer | not null default 0\n> Indexes:\n> \"table1_pkey\" PRIMARY KEY, btree (crmid)\n> \"table1_createdtime_idx\" btree (createdtime)\n> \"table1_modifiedby_idx\" btree (modifiedby)\n> \"table1_modifiedtime_idx\" btree (modifiedtime)\n> \"table1_module_idx\" btree (module) WHERE deleted = 0\n> \"table1_smcreatorid_idx\" btree (smcreatorid)\n> \"table1_smownerid_idx\" btree (smownerid)\n> \"ftx_en_table1_description\" gin (to_tsvector('vcrm_en'::regconfig,\n> for_fts(description)))\n> \"table1_deleted_idx\" btree (deleted)\n\n\n\n\\d table2\n> Table \"public.table2\"\n> Column | Type |\n> Modifiers\n>\n> -------------------------+------------------------+-------------------------------------------\n> table2id | integer | not null default 0\n> subject | character varying(250) | not null\n> semodule | character varying(20) |\n> table2type | character varying(200) | not null\n> date_start | date | not null\n> due_date | date |\n> time_start | character varying(50) |\n> time_end | character varying(50) |\n> sendnotification | character varying(3) | not null default\n> '0'::character varying\n> duration_hours | character varying(2) |\n> duration_minutes | character varying(200) |\n> status | character varying(200) |\n> eventstatus | character varying(200) |\n> priority | character varying(200) |\n> location | character varying(150) |\n> notime | character varying(3) | not null default\n> '0'::character varying\n> visibility | character varying(50) | not null default\n> 'all'::character varying\n> recurringtype | character varying(200) |\n> end_date | date |\n> end_time | character varying(50) |\n> duration_seconds | integer | not null default 0\n> phone | character varying(100) |\n> vip_name | character varying(200) |\n> is_offline_call | smallint | default 0\n> campaign_id | bigint |\n> table2_classification | character varying(255) |\n> Indexes:\n> \"table2_pkey\" PRIMARY KEY, btree (table2id)\n> \"table2_table2type_idx\" btree (table2type)\n> \"table2_date_start_idx\" btree (date_start)\n> \"table2_due_date_idx\" btree (due_date)\n> \"table2_eventstatus_idx\" btree (eventstatus)\n> \"table2_status_idx\" btree (status)\n> \"table2_subject_idx\" btree (subject)\n> \"table2_time_start_idx\" btree (time_start)\n> \"ftx_en_table2_subject\" gin (to_tsvector('vcrm_en'::regconfig,\n> for_fts(subject::text)))\n\n\n\nAs most of the queries were executed based on module.\n\nselect module,count(*) from table1 group by module;\n> module | count\n> -----------------------+--------\n> Leads | 463237\n> Calendar | 431041\n> Accounts | 304225\n> Contacts | 299211\n> Emails | 199876\n> HelpDesk | 135977\n> Potentials | 30826\n> Emails Attachment | 28249\n> Notes | 1029\n> Accounts Attachment | 1015\n\n\n\nI paritioned the table 
based on module. And created index on each separate\ntables.\nAfter parition the table structure as follows :-\n\n\\d+ table1\n> Table \"public.table1\"\n> Column | Type | Modifiers | Storage\n> | Description\n>\n> --------------+-----------------------------+--------------------+----------+-------------\n> crmid | integer | not null | plain\n> |\n> smcreatorid | integer | not null default 0 | plain\n> |\n> smownerid | integer | not null default 0 | plain\n> |\n> modifiedby | integer | not null default 0 | plain\n> |\n> module | character varying(30) | not null |\n> extended |\n> description | text | |\n> extended |\n> createdtime | timestamp without time zone | not null | plain\n> |\n> modifiedtime | timestamp without time zone | not null | plain\n> |\n> viewedtime | timestamp without time zone | | plain\n> |\n> status | character varying(50) | |\n> extended |\n> version | integer | not null default 0 | plain\n> |\n> presence | integer | default 1 | plain\n> |\n> deleted | integer | not null default 0 | plain\n> |\n> Indexes:\n> \"table1_pkey1\" PRIMARY KEY, btree (crmid)\n> Child tables: table1_accounts,\n> table1_calendar,\n> table1_emails,\n> table1_helpdesk,\n> table1_leads,\n> table1_others\n> Has OIDs: no\n\n\n\n\n*Without parition :-*\n\nexplain analyze\n> select *\n> from table1 as c\n> inner join table2 as a on c.crmid = a.table2id and deleted = 0\n> where module ='Leads'\n> ;\n>\n> QUERY PLAN\n>\n>\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=93557.89..160291.06 rows=112087 width=506) (actual\n> time=4013.152..4013.152 rows=0 loops=1)\n> Hash Cond: (a.table2id = c.crmid)\n> -> Seq Scan on table2 a (cost=0.00..18337.34 rows=681434 width=139)\n> (actual time=0.028..834.189 rows=681434 loops=1)\n> -> Hash (cost=73716.32..73716.32 rows=328765 width=367) (actual\n> time=1620.810..1620.810 rows=287365 loops=1)\n> Buckets: 1024 Batches: 128 Memory Usage: 226kB\n> -> Bitmap Heap Scan on table1 c (cost=9489.85..73716.32\n> rows=328765 width=367) (actual time=83.092..1144.159 rows=287365 loops=1)\n> Recheck Cond: (((module)::text = 'Leads'::text) AND\n> (deleted = 0))\n> -> Bitmap Index Scan on table1_module_idx\n> (cost=0.00..9407.66 rows=328765 width=0) (actual time=79.232..79.232\n> rows=287365 loops=1)\n> Index Cond: ((module)::text = 'Leads'::text)\n> Total runtime: 4013.932 ms\n> (10 rows)\n\n\n\n*With Parition :- *\n\n\n\n>\n> explain analyze\n>> select *\n>> from table1 as c\n>> inner join table2 as a on c.crmid = a.table2id and deleted = 0\n>> where module ='Leads';\n>>\n>> QUERY PLAN\n>>\n>>\n>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Hash Join (cost=108101.50..175252.57 rows=313256 width=506) (actual\n>> time=8430.588..8430.588 rows=0 loops=1)\n>> Hash Cond: (a.table2id = c.crmid)\n>> -> Seq Scan on table2 a (cost=0.00..18337.34 rows=681434 width=139)\n>> (actual time=0.054..870.554 rows=681434 loops=1)\n>> -> Hash (cost=89195.80..89195.80 rows=313256 width=367) (actual\n>> time=2751.950..2751.950 rows=287365 loops=1)\n>> Buckets: 1024 Batches: 128 Memory Usage: 226kB\n>> -> Append (cost=0.00..89195.80 rows=313256 width=367) (actual\n>> time=0.034..2304.191 rows=287365 loops=1)\n>> -> Seq Scan on table1 c (cost=0.00..89187.53 rows=313255\n>> width=367) (actual time=0.032..1783.075 rows=287365 loops=1)\n>> Filter: 
((deleted = 0) AND ((module)::text =\n>> 'Leads'::text))\n>> -> Index Scan using table1_leads_deleted_idx on\n>> table1_leads c (cost=0.00..8.27 rows=1 width=280) (actual\n>> time=0.010..0.010 rows=0 loops=1)\n>> Index Cond: (deleted = 0)\n>> Filter: ((module)::text = 'Leads'::text)\n>> Total runtime: 8432.024 ms\n>> (12 rows)\n>\n>\nI set constraint_exclusion to partition.\n\nWhy do I need more time with parition?\nAny experts please let me know.",
"msg_date": "Tue, 24 Jul 2012 16:42:34 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why do I need more time with partition table?"
},
{
"msg_contents": "In addition to the previous mail, I am adding here that -\nMy Postgresql version is 9.1.2.\n\nAnd one more thing, executing the following query I got two query plan\nwhere the second one looked strange to me.\nIf showed to take 20950.579 ms, but investigating both the plan I found\nthat it took less time in every step of second plan.\n\nexplain analyze\n> select *\n> from table1 as c\n> inner join table2 as a on c.crmid = a.activityid and deleted = 0\n> where module ='Leads';\n>\n> QUERY PLAN\n>\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=108101.50..175252.57 rows=313256 width=506) (actual\n> time=5194.683..5194.683 rows=0 loops=1)\n> Hash Cond: (a.activityid = c.crmid)\n> -> Seq Scan on table2 a (cost=0.00..18337.34 rows=681434 width=139)\n> (actual time=0.062..823.380 rows=681434 loops=1)\n> -> Hash (cost=89195.80..89195.80 rows=313256 width=367) (actual\n> time=2813.000..2813.000 rows=287365 loops=1)\n> Buckets: 1024 Batches: 128 Memory Usage: 226kB\n> -> Append (cost=0.00..89195.80 rows=313256 width=367) (actual\n> time=0.062..2352.646 rows=287365 loops=1)\n> -> Seq Scan on table1 c (cost=0.00..89187.53 rows=313255\n> width=367) (actual time=0.060..1820.331 rows=287365 loops=1)\n> Filter: ((deleted = 0) AND ((module)::text =\n> 'Leads'::text))\n> -> Index Scan using crmentity_leads_deleted_idx on\n> table1_leads c (cost=0.00..8.27 rows=1 width=280) (actual\n> time=11.076..11.076 rows=0 loops=1)\n> Index Cond: (deleted = 0)\n> Filter: ((module)::text = 'Leads'::text)\n> Total runtime: 5195.117 ms\n> (12 rows)\n>\n\nExecuting the query again -\n\n*\\g*\n>\n> QUERY PLAN\n>\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=108101.50..175252.57 rows=313256 width=506) (actual\n> time=20950.161..20950.161 rows=0 loops=1)\n> Hash Cond: (a.activityid = c.crmid)\n> -> Seq Scan on table2 a (cost=0.00..18337.34 rows=681434 width=139)\n> (actual time=0.092..835.241 rows=681434 loops=1)\n> -> Hash (cost=89195.80..89195.80 rows=313256 width=367) (actual\n> time=2774.250..2774.250 rows=287365 loops=1)\n> Buckets: 1024 Batches: 128 Memory Usage: 226kB\n> -> Append (cost=0.00..89195.80 rows=313256 width=367) (actual\n> time=0.061..2318.759 rows=287365 loops=1)\n> -> Seq Scan on table1 c (cost=0.00..89187.53 rows=313255\n> width=367) (actual time=0.059..1799.937 rows=287365 loops=1)\n> Filter: ((deleted = 0) AND ((module)::text =\n> 'Leads'::text))\n> -> Index Scan using crmentity_leads_deleted_idx on\n> table1_leads c (cost=0.00..8.27 rows=1 width=280) (actual\n> time=0.011..0.011 rows=0 loops=1)\n> Index Cond: (deleted = 0)\n> Filter: ((module)::text = 'Leads'::text)\n> Total runtime: 20950.579 ms\n> (12 rows)\n>\n\nOn Tue, Jul 24, 2012 at 4:42 PM, AI Rumman <[email protected]> wrote:\n\n> I partitioned a table, but didn't find any improvement in query timing.\n>\n> The basic table was like as follows :-\n>\n> \\d table1\n>> Table \"public.table1_old\"\n>> Column | Type | Modifiers\n>> --------------+-----------------------------+--------------------\n>> crmid | integer | not null\n>> smcreatorid | integer | not null default 0\n>> smownerid | integer | not null default 0\n>> modifiedby | integer | not null default 0\n>> module | character varying(30) | not null\n>> description | text 
|\n>> createdtime | timestamp without time zone | not null\n>> modifiedtime | timestamp without time zone | not null\n>> viewedtime | timestamp without time zone |\n>> status | character varying(50) |\n>> version | integer | not null default 0\n>> presence | integer | default 1\n>> deleted | integer | not null default 0\n>> Indexes:\n>> \"table1_pkey\" PRIMARY KEY, btree (crmid)\n>> \"table1_createdtime_idx\" btree (createdtime)\n>> \"table1_modifiedby_idx\" btree (modifiedby)\n>> \"table1_modifiedtime_idx\" btree (modifiedtime)\n>> \"table1_module_idx\" btree (module) WHERE deleted = 0\n>> \"table1_smcreatorid_idx\" btree (smcreatorid)\n>> \"table1_smownerid_idx\" btree (smownerid)\n>> \"ftx_en_table1_description\" gin (to_tsvector('vcrm_en'::regconfig,\n>> for_fts(description)))\n>> \"table1_deleted_idx\" btree (deleted)\n>\n>\n>\n> \\d table2\n>> Table \"public.table2\"\n>> Column | Type |\n>> Modifiers\n>>\n>> -------------------------+------------------------+-------------------------------------------\n>> table2id | integer | not null default 0\n>> subject | character varying(250) | not null\n>> semodule | character varying(20) |\n>> table2type | character varying(200) | not null\n>> date_start | date | not null\n>> due_date | date |\n>> time_start | character varying(50) |\n>> time_end | character varying(50) |\n>> sendnotification | character varying(3) | not null default\n>> '0'::character varying\n>> duration_hours | character varying(2) |\n>> duration_minutes | character varying(200) |\n>> status | character varying(200) |\n>> eventstatus | character varying(200) |\n>> priority | character varying(200) |\n>> location | character varying(150) |\n>> notime | character varying(3) | not null default\n>> '0'::character varying\n>> visibility | character varying(50) | not null default\n>> 'all'::character varying\n>> recurringtype | character varying(200) |\n>> end_date | date |\n>> end_time | character varying(50) |\n>> duration_seconds | integer | not null default 0\n>> phone | character varying(100) |\n>> vip_name | character varying(200) |\n>> is_offline_call | smallint | default 0\n>> campaign_id | bigint |\n>> table2_classification | character varying(255) |\n>> Indexes:\n>> \"table2_pkey\" PRIMARY KEY, btree (table2id)\n>> \"table2_table2type_idx\" btree (table2type)\n>> \"table2_date_start_idx\" btree (date_start)\n>> \"table2_due_date_idx\" btree (due_date)\n>> \"table2_eventstatus_idx\" btree (eventstatus)\n>> \"table2_status_idx\" btree (status)\n>> \"table2_subject_idx\" btree (subject)\n>> \"table2_time_start_idx\" btree (time_start)\n>> \"ftx_en_table2_subject\" gin (to_tsvector('vcrm_en'::regconfig,\n>> for_fts(subject::text)))\n>\n>\n>\n> As most of the queries were executed based on module.\n>\n> select module,count(*) from table1 group by module;\n>> module | count\n>> -----------------------+--------\n>> Leads | 463237\n>> Calendar | 431041\n>> Accounts | 304225\n>> Contacts | 299211\n>> Emails | 199876\n>> HelpDesk | 135977\n>> Potentials | 30826\n>> Emails Attachment | 28249\n>> Notes | 1029\n>> Accounts Attachment | 1015\n>\n>\n>\n> I paritioned the table based on module. 
And created index on each separate\n> tables.\n> After parition the table structure as follows :-\n>\n> \\d+ table1\n>> Table \"public.table1\"\n>> Column | Type | Modifiers |\n>> Storage | Description\n>>\n>> --------------+-----------------------------+--------------------+----------+-------------\n>> crmid | integer | not null | plain\n>> |\n>> smcreatorid | integer | not null default 0 | plain\n>> |\n>> smownerid | integer | not null default 0 | plain\n>> |\n>> modifiedby | integer | not null default 0 | plain\n>> |\n>> module | character varying(30) | not null |\n>> extended |\n>> description | text | |\n>> extended |\n>> createdtime | timestamp without time zone | not null | plain\n>> |\n>> modifiedtime | timestamp without time zone | not null | plain\n>> |\n>> viewedtime | timestamp without time zone | | plain\n>> |\n>> status | character varying(50) | |\n>> extended |\n>> version | integer | not null default 0 | plain\n>> |\n>> presence | integer | default 1 | plain\n>> |\n>> deleted | integer | not null default 0 | plain\n>> |\n>> Indexes:\n>> \"table1_pkey1\" PRIMARY KEY, btree (crmid)\n>> Child tables: table1_accounts,\n>> table1_calendar,\n>> table1_emails,\n>> table1_helpdesk,\n>> table1_leads,\n>> table1_others\n>> Has OIDs: no\n>\n>\n>\n>\n> *Without parition :-*\n>\n> explain analyze\n>> select *\n>> from table1 as c\n>> inner join table2 as a on c.crmid = a.table2id and deleted = 0\n>> where module ='Leads'\n>> ;\n>>\n>> QUERY PLAN\n>>\n>>\n>> -----------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Hash Join (cost=93557.89..160291.06 rows=112087 width=506) (actual\n>> time=4013.152..4013.152 rows=0 loops=1)\n>> Hash Cond: (a.table2id = c.crmid)\n>> -> Seq Scan on table2 a (cost=0.00..18337.34 rows=681434 width=139)\n>> (actual time=0.028..834.189 rows=681434 loops=1)\n>> -> Hash (cost=73716.32..73716.32 rows=328765 width=367) (actual\n>> time=1620.810..1620.810 rows=287365 loops=1)\n>> Buckets: 1024 Batches: 128 Memory Usage: 226kB\n>> -> Bitmap Heap Scan on table1 c (cost=9489.85..73716.32\n>> rows=328765 width=367) (actual time=83.092..1144.159 rows=287365 loops=1)\n>> Recheck Cond: (((module)::text = 'Leads'::text) AND\n>> (deleted = 0))\n>> -> Bitmap Index Scan on table1_module_idx\n>> (cost=0.00..9407.66 rows=328765 width=0) (actual time=79.232..79.232\n>> rows=287365 loops=1)\n>> Index Cond: ((module)::text = 'Leads'::text)\n>> Total runtime: 4013.932 ms\n>> (10 rows)\n>\n>\n>\n> *With Parition :- *\n>\n>\n>\n>>\n>> explain analyze\n>>> select *\n>>> from table1 as c\n>>> inner join table2 as a on c.crmid = a.table2id and deleted = 0\n>>> where module ='Leads';\n>>>\n>>> QUERY PLAN\n>>>\n>>>\n>>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Hash Join (cost=108101.50..175252.57 rows=313256 width=506) (actual\n>>> time=8430.588..8430.588 rows=0 loops=1)\n>>> Hash Cond: (a.table2id = c.crmid)\n>>> -> Seq Scan on table2 a (cost=0.00..18337.34 rows=681434 width=139)\n>>> (actual time=0.054..870.554 rows=681434 loops=1)\n>>> -> Hash (cost=89195.80..89195.80 rows=313256 width=367) (actual\n>>> time=2751.950..2751.950 rows=287365 loops=1)\n>>> Buckets: 1024 Batches: 128 Memory Usage: 226kB\n>>> -> Append (cost=0.00..89195.80 rows=313256 width=367) (actual\n>>> time=0.034..2304.191 rows=287365 loops=1)\n>>> -> Seq Scan on table1 c 
(cost=0.00..89187.53\n>>> rows=313255 width=367) (actual time=0.032..1783.075 rows=287365 loops=1)\n>>> Filter: ((deleted = 0) AND ((module)::text =\n>>> 'Leads'::text))\n>>> -> Index Scan using table1_leads_deleted_idx on\n>>> table1_leads c (cost=0.00..8.27 rows=1 width=280) (actual\n>>> time=0.010..0.010 rows=0 loops=1)\n>>> Index Cond: (deleted = 0)\n>>> Filter: ((module)::text = 'Leads'::text)\n>>> Total runtime: 8432.024 ms\n>>> (12 rows)\n>>\n>>\n> I set constraint_exclusion to partition.\n>\n> Why do I need more time with parition?\n> Any experts please let me know.\n>",
"msg_date": "Tue, 24 Jul 2012 17:35:54 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why do I need more time with partition table?"
},
{
"msg_contents": "hi al,\n\n> With Parition :- \n> \n> \n> explain analyze\n> select * \n> from table1 as c\n> inner join table2 as a on c.crmid = a.table2id and deleted = 0\n> where module ='Leads';\n> QUERY PLAN \n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=108101.50..175252.57 rows=313256 width=506) (actual time=8430.588..8430.588 rows=0 loops=1)\n> Hash Cond: (a.table2id = c.crmid)\n> -> Seq Scan on table2 a (cost=0.00..18337.34 rows=681434 width=139) (actual time=0.054..870.554 rows=681434 loops=1)\n> -> Hash (cost=89195.80..89195.80 rows=313256 width=367) (actual time=2751.950..2751.950 rows=287365 loops=1)\n> Buckets: 1024 Batches: 128 Memory Usage: 226kB\n> -> Append (cost=0.00..89195.80 rows=313256 width=367) (actual time=0.034..2304.191 rows=287365 loops=1)\n> -> Seq Scan on table1 c (cost=0.00..89187.53 rows=313255 width=367) (actual time=0.032..1783.075 rows=287365 loops=1)\n> Filter: ((deleted = 0) AND ((module)::text = 'Leads'::text))\n> -> Index Scan using table1_leads_deleted_idx on table1_leads c (cost=0.00..8.27 rows=1 width=280) (actual time=0.010..0.010 rows=0 loops=1)\n> Index Cond: (deleted = 0)\n> Filter: ((module)::text = 'Leads'::text)\n> Total runtime: 8432.024 ms\n> (12 rows)\n> \n> I set constraint_exclusion to partition.\n> \n> Why do I need more time with parition?\n\nit looks like you don't moved your data from base-table to your partitions.\n\nregards, jan\n\n",
"msg_date": "Tue, 24 Jul 2012 13:46:28 +0200",
"msg_from": "Jan Otto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do I need more time with partition table?"
},
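For context on the fix being suggested here: with PostgreSQL 9.1-style inheritance partitioning, the parent only stops being scanned in full once the rows have actually been moved into the children and each child carries a CHECK constraint that constraint_exclusion can use; new rows are usually routed by a trigger on the parent. The sketch below shows that missing plumbing, using names modelled on the thread (table1, table1_leads, table1_others, module); it is an illustration of the documented inheritance-partitioning pattern, not the poster's actual DDL.

    -- Each child needs a CHECK constraint so constraint_exclusion can prune it
    ALTER TABLE table1_leads
        ADD CONSTRAINT table1_leads_module_check CHECK (module = 'Leads');

    -- Route new rows inserted into the parent to the right child
    CREATE OR REPLACE FUNCTION table1_insert_router() RETURNS trigger AS $$
    BEGIN
        IF NEW.module = 'Leads' THEN
            INSERT INTO table1_leads VALUES (NEW.*);
        ELSE
            INSERT INTO table1_others VALUES (NEW.*);
        END IF;
        RETURN NULL;   -- the row has already been stored in a child
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER table1_insert_trigger
        BEFORE INSERT ON table1
        FOR EACH ROW EXECUTE PROCEDURE table1_insert_router();

    -- Move the rows that are still sitting in the parent (in one transaction)
    BEGIN;
    INSERT INTO table1_leads SELECT * FROM ONLY table1 WHERE module = 'Leads';
    DELETE FROM ONLY table1 WHERE module = 'Leads';
    COMMIT;

Once the parent is empty and the CHECK constraints are in place, the Append node in the plan should touch only the matching child plus a zero-cost scan of the empty parent, which is what the later plans in this thread show.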
{
"msg_contents": "Thanks. I missed to add the trigger.\nNow I added it, but still without partition taking less time compared to\nwith partition query.\n\n*With partition :- *\n\nexplain analyze\n> select *\n> from table1 as c\n> inner join table2 as a on c.crmid = a.activityid and deleted = 0\n> where module ='Leads'\n> ;\n>\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=25669.79..86440.88 rows=288058 width=367) (actual\n> time=4411.734..4411.734 rows=0 loops=1)\n> Hash Cond: (a.activityid = c.crmid)\n> -> Seq Scan on table2 a (cost=0.00..18337.34 rows=681434 width=139)\n> (actual time=0.264..1336.555 rows=681434 loops=1)\n> -> Hash (cost=13207.07..13207.07 rows=288058 width=228) (actual\n> time=1457.495..1457.495 rows=287365 loops=1)\n> Buckets: 1024 Batches: 128 Memory Usage: 226kB\n> -> Append (cost=0.00..13207.07 rows=288058 width=228) (actual\n> time=0.014..1000.182 rows=287365 loops=1)\n> -> Seq Scan on table1 c (cost=0.00..0.00 rows=1\n> width=367) (actual time=0.001..0.001 rows=0 loops=1)\n> Filter: ((deleted = 0) AND ((module)::text =\n> 'Leads'::text))\n> -> Seq Scan on table1_leads c (cost=0.00..13207.07\n> rows=288057 width=228) (actual time=0.010..490.169 rows=287365 loops=1)\n> Filter: ((deleted = 0) AND ((module)::text =\n> 'Leads'::text))\n> Total runtime: 4412.534 ms\n> (11 rows)\n\n\n*Without partition :- *\n\nexplain analyze\n> select *\n> from table1_old as c\n> inner join table2 as a on c.crmid = a.activityid and deleted = 0\n> where module ='Leads'\n> ;\n>\n> QUERY PLAN\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=92095.07..157111.03 rows=107445 width=502) (actual\n> time=3795.273..3795.273 rows=0 loops=1)\n> Hash Cond: (a.activityid = c.crmid)\n> -> Seq Scan on table2 a (cost=0.00..18337.34 rows=681434 width=139)\n> (actual time=0.030..812.925 rows=681434 loops=1)\n> -> Hash (cost=73246.44..73246.44 rows=314850 width=363) (actual\n> time=1377.624..1377.624 rows=287365 loops=1)\n> Buckets: 1024 Batches: 128 Memory Usage: 226kB\n> -> Bitmap Heap Scan on table1_old c (cost=9228.69..73246.44\n> rows=314850 width=363) (actual time=83.189..926.542 rows=287365 loops=1)\n> Recheck Cond: (((module)::text = 'Leads'::text) AND\n> (deleted = 0))\n> -> Bitmap Index Scan on crmentity_module_idx\n> (cost=0.00..9149.98 rows=314850 width=0) (actual time=79.357..79.357\n> rows=287365 loops=1)\n> Index Cond: ((module)::text = 'Leads'::text)\n> Total runtime: 3795.721 ms\n> (10 rows)\n\n\n\nOn Tue, Jul 24, 2012 at 5:46 PM, Jan Otto <[email protected]> wrote:\n\n> hi al,\n>\n> > With Parition :-\n> >\n> >\n> > explain analyze\n> > select *\n> > from table1 as c\n> > inner join table2 as a on c.crmid = a.table2id and deleted = 0\n> > where module ='Leads';\n> >\n> QUERY PLAN\n> >\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > Hash Join (cost=108101.50..175252.57 rows=313256 width=506) (actual\n> time=8430.588..8430.588 rows=0 loops=1)\n> > Hash Cond: (a.table2id = c.crmid)\n> > -> Seq Scan on table2 a (cost=0.00..18337.34 rows=681434 width=139)\n> (actual time=0.054..870.554 rows=681434 loops=1)\n> > -> Hash (cost=89195.80..89195.80 rows=313256 width=367) (actual\n> 
time=2751.950..2751.950 rows=287365 loops=1)\n> > Buckets: 1024 Batches: 128 Memory Usage: 226kB\n> > -> Append (cost=0.00..89195.80 rows=313256 width=367) (actual\n> time=0.034..2304.191 rows=287365 loops=1)\n> > -> Seq Scan on table1 c (cost=0.00..89187.53\n> rows=313255 width=367) (actual time=0.032..1783.075 rows=287365 loops=1)\n> > Filter: ((deleted = 0) AND ((module)::text =\n> 'Leads'::text))\n> > -> Index Scan using table1_leads_deleted_idx on\n> table1_leads c (cost=0.00..8.27 rows=1 width=280) (actual\n> time=0.010..0.010 rows=0 loops=1)\n> > Index Cond: (deleted = 0)\n> > Filter: ((module)::text = 'Leads'::text)\n> > Total runtime: 8432.024 ms\n> > (12 rows)\n> >\n> > I set constraint_exclusion to partition.\n> >\n> > Why do I need more time with parition?\n>\n> it looks like you don't moved your data from base-table to your partitions.\n>\n> regards, jan\n>\n>",
"msg_date": "Wed, 25 Jul 2012 14:40:33 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why do I need more time with partition table?"
},
{
"msg_contents": "hi al,\n\nOn Jul 25, 2012, at 10:40 AM, AI Rumman <[email protected]> wrote:\n\n> Thanks. I missed to add the trigger.\n> Now I added it, but still without partition taking less time compared to with partition query.\n> \n> With partition :- \n> \n> explain analyze\n> select * \n> from table1 as c\n> inner join table2 as a on c.crmid = a.activityid and deleted = 0\n> where module ='Leads'\n> ;\n> \n> QUERY PLAN \n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=25669.79..86440.88 rows=288058 width=367) (actual time=4411.734..4411.734 rows=0 loops=1)\n> Hash Cond: (a.activityid = c.crmid)\n> -> Seq Scan on table2 a (cost=0.00..18337.34 rows=681434 width=139) (actual time=0.264..1336.555 rows=681434 loops=1)\n> -> Hash (cost=13207.07..13207.07 rows=288058 width=228) (actual time=1457.495..1457.495 rows=287365 loops=1)\n> Buckets: 1024 Batches: 128 Memory Usage: 226kB\n> -> Append (cost=0.00..13207.07 rows=288058 width=228) (actual time=0.014..1000.182 rows=287365 loops=1)\n> -> Seq Scan on table1 c (cost=0.00..0.00 rows=1 width=367) (actual time=0.001..0.001 rows=0 loops=1)\n> Filter: ((deleted = 0) AND ((module)::text = 'Leads'::text))\n> -> Seq Scan on table1_leads c (cost=0.00..13207.07 rows=288057 width=228) (actual time=0.010..490.169 rows=287365 loops=1)\n> Filter: ((deleted = 0) AND ((module)::text = 'Leads'::text))\n> Total runtime: 4412.534 ms\n> (11 rows)\n\ndid you have analyze'd your tables? try if indexing column deleted on table1_leads gives you some more speed.\n\nregards, jan\n",
"msg_date": "Wed, 25 Jul 2012 16:42:24 +0200",
"msg_from": "Jan Otto <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do I need more time with partition table?"
},
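Jan's two suggestions map to roughly the following statements; table and index names are placeholders based on the thread, and whether the index actually helps depends on selectivity (almost every row in table1_leads has deleted = 0 here, so the planner may reasonably stick with a sequential scan):

    ANALYZE table1;         -- parent, now (nearly) empty after the data move
    ANALYZE table1_leads;   -- newly populated child, so fresh statistics matter

    CREATE INDEX table1_leads_deleted_idx2 ON table1_leads (deleted);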
{
"msg_contents": "On Wed, Jul 25, 2012 at 1:40 AM, AI Rumman <[email protected]> wrote:\n> Thanks. I missed to add the trigger.\n> Now I added it, but still without partition taking less time compared to\n> with partition query.\n\nBased on the different times on \"Seq Scan on table2\", it looks like\none query has better caching than the other.\n\nDid you try running the queries in alternating order, to average out\ncaching effects?\n\nCould you run the \"explain (analyze, buffers)\" on those to get a\nbetter picture of the buffer effects?\n\nCheers,\n\nJeff\n",
"msg_date": "Wed, 25 Jul 2012 07:55:55 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why do I need more time with partition table?"
}
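The caching question Jeff raises is easiest to answer with the buffers option of EXPLAIN, available since 9.0. A sketch of the invocation, with the join written the same way as in the thread (the Buffers line quoted in the comment is illustrative, not real output from this system):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM table1 AS c
    INNER JOIN table2 AS a ON c.crmid = a.activityid AND deleted = 0
    WHERE module = 'Leads';

    -- Each node then reports a line such as:
    --   Buffers: shared hit=12345 read=6789
    -- "hit" pages came from shared_buffers, "read" pages had to be fetched from
    -- the OS page cache or disk, so two runs of the identical plan can differ
    -- widely in runtime while the cost estimates stay the same.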
] |
[
{
"msg_contents": "Hi,\n\nIn statistical reports gathered by PgBadger on our PostgreSQL databases\nalmost always we have in \"Queries that took up the most time\" report table\ninformation about transactions start time ('BEGIN;' command). Something\nlike that in example below:\n\n2 3h34m52.26s 48,556,167 0.00s BEGIN;\n\n 0.82s | BEGIN;\n 0.82s | BEGIN;\n 0.82s | BEGIN;\n 0.81s | BEGIN;\n 0.81s | BEGIN;\n 0.81s | BEGIN;\n 0.80s | BEGIN;\n 0.80s | BEGIN;\n 0.79s | BEGIN;\n 0.79s | BEGIN;\n\nDatabases placed on different hardware, OS - Debian GNU/Linux, PostgreSQL\n9.1\n\nSo, questions are:\n1. Is this a normal situation with transactions start time ( BEGIN method) ?\n2. How can we reduce transactions start time if it's possible in principle?\n3. What happens in PostgreSQL on transaction starting time? Can someone\ndescribe this process in detail? (of course, I saw in PostgreSQL source\ncode, for example, definition such kind functions, like StartTransaction\nfunction, but it's not so easy to understand for third-party researcher,\nthat all of these operations mean in real for performance)\n\nBest Regards\n\nAleksei\n\nHi,In statistical reports gathered by PgBadger on our PostgreSQL databases almost always we have in \"Queries that took up the most time\" report table information about transactions start time ('BEGIN;' command). Something like that in example below:\n2 3h34m52.26s 48,556,167 0.00s BEGIN; 0.82s | BEGIN; 0.82s | BEGIN; 0.82s | BEGIN; 0.81s | BEGIN;\n 0.81s | BEGIN; 0.81s | BEGIN; 0.80s | BEGIN; 0.80s | BEGIN; 0.79s | BEGIN; 0.79s | BEGIN;\nDatabases placed on different hardware, OS - Debian GNU/Linux, PostgreSQL 9.1So, questions are: 1. Is this a normal situation with transactions start time ( BEGIN method) ?2. How can we reduce transactions start time if it's possible in principle?\n3. What happens in PostgreSQL on transaction starting time? Can someone describe this process in detail? (of course, I saw in PostgreSQL source code, for example, definition such kind functions, like StartTransaction function, but it's not so easy to understand for third-party researcher, that all of these operations mean in real for performance)\nBest RegardsAleksei",
"msg_date": "Tue, 24 Jul 2012 14:14:35 +0300",
"msg_from": "Aleksei Arefjev <[email protected]>",
"msg_from_op": true,
"msg_subject": "transactions start time"
},
{
"msg_contents": "On 24/07/12 12:14, Aleksei Arefjev wrote:\n> Hi,\n>\n> In statistical reports gathered by PgBadger on our PostgreSQL databases\n> almost always we have in \"Queries that took up the most time\" report\n> table information about transactions start time ('BEGIN;' command).\n> Something like that in example below:\n>\n> 2 3h34m52.26s 48,556,167 0.00s BEGIN;\n>\n> 0.82s | BEGIN;\n> 0.82s | BEGIN;\n> 0.82s | BEGIN;\n> 0.81s | BEGIN;\n> 0.81s | BEGIN;\n> 0.81s | BEGIN;\n> 0.80s | BEGIN;\n> 0.80s | BEGIN;\n> 0.79s | BEGIN;\n> 0.79s | BEGIN;\n\nI'm not sure if I'm reading this right, but are there more than 48 \nmillion BEGINs that took 0s each (presumably rounded down) and then a \nhandful taking about 0.8s?\n\nIf so, then it's likely nothing to do with the BEGIN and just that the \nmachine was busy doing other things when you started a transaction.\n\n> Databases placed on different hardware, OS - Debian GNU/Linux,\n> PostgreSQL 9.1\n>\n> So, questions are:\n> 1. Is this a normal situation with transactions start time ( BEGIN method) ?\n\nSee above\n\n> 2. How can we reduce transactions start time if it's possible in principle?\n\nBelow 0.00? Probably not\n\n> 3. What happens in PostgreSQL on transaction starting time? Can someone\n> describe this process in detail? (of course, I saw in PostgreSQL source\n> code, for example, definition such kind functions, like StartTransaction\n> function, but it's not so easy to understand for third-party researcher,\n> that all of these operations mean in real for performance)\n\nWell there are two important things to understand:\n1. All* commands run in a transaction\n2. I think most of the work in getting a new snapshot etc gets pushed \nback until it's needed.\n\nSo - the overall impact of issuing BEGIN should be close to zero.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 24 Jul 2012 18:21:38 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: transactions start time"
},
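Richard's point that a bare BEGIN is essentially free is easy to check locally with psql's \timing, assuming an otherwise idle connection to the server (a sketch, not measurements from the poster's system):

    \timing on
    BEGIN;    -- psql prints a "Time: ..." line; for an empty BEGIN this is normally well under a millisecond
    COMMIT;   -- likewise for committing an empty transaction

If pgBadger nevertheless attributes seconds to BEGIN, the time is usually being spent somewhere other than in the statement itself, for example on the network or in a connection pooler.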
{
"msg_contents": "On 24 July 2012 20:21, Richard Huxton <[email protected]> wrote:\n\n> On 24/07/12 12:14, Aleksei Arefjev wrote:\n>\n>> Hi,\n>>\n>> In statistical reports gathered by PgBadger on our PostgreSQL databases\n>> almost always we have in \"Queries that took up the most time\" report\n>> table information about transactions start time ('BEGIN;' command).\n>> Something like that in example below:\n>>\n>> 2 3h34m52.26s 48,556,167 0.00s BEGIN;\n>>\n>> 0.82s | BEGIN;\n>> 0.82s | BEGIN;\n>> 0.82s | BEGIN;\n>> 0.81s | BEGIN;\n>> 0.81s | BEGIN;\n>> 0.81s | BEGIN;\n>> 0.80s | BEGIN;\n>> 0.80s | BEGIN;\n>> 0.79s | BEGIN;\n>> 0.79s | BEGIN;\n>>\n>\n> I'm not sure if I'm reading this right, but are there more than 48 million\n> BEGINs that took 0s each (presumably rounded down) and then a handful\n> taking about 0.8s?\n>\n\n0.00s - this is the average duration parameter column. Them, seems, much\nmore, and those were shown like examples.\n\n\n\n> If so, then it's likely nothing to do with the BEGIN and just that the\n> machine was busy doing other things when you started a transaction.\n\n\nPerhaps so, but, at execution time, there were not any problem with\nperformance on those machines.\n\n\n>\n>\n> Databases placed on different hardware, OS - Debian GNU/Linux,\n>> PostgreSQL 9.1\n>>\n>> So, questions are:\n>> 1. Is this a normal situation with transactions start time ( BEGIN\n>> method) ?\n>>\n>\n> See above\n>\n>\n> 2. How can we reduce transactions start time if it's possible in\n>> principle?\n>>\n>\n> Below 0.00? Probably not\n>\n>\n> 3. What happens in PostgreSQL on transaction starting time? Can someone\n>> describe this process in detail? (of course, I saw in PostgreSQL source\n>> code, for example, definition such kind functions, like StartTransaction\n>> function, but it's not so easy to understand for third-party researcher,\n>> that all of these operations mean in real for performance)\n>>\n>\n> Well there are two important things to understand:\n> 1. All* commands run in a transaction\n>\n\nYes, I know it.\n\n\n> 2. I think most of the work in getting a new snapshot etc gets pushed back\n> until it's needed.\n>\n\nProbably so, but I wanna know, is there any opportunity to optimize this\nprocess.\n\n\n>\n> So - the overall impact of issuing BEGIN should be close to zero.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\n\nAnd yet, repeating the question: What happens in PostgreSQL on transaction\nstarting time? Can someone\ndescribe this process in detail?\n\nRegards\n\nAleksei\n\nOn 24 July 2012 20:21, Richard Huxton <[email protected]> wrote:\nOn 24/07/12 12:14, Aleksei Arefjev wrote:\n\nHi,\n\nIn statistical reports gathered by PgBadger on our PostgreSQL databases\nalmost always we have in \"Queries that took up the most time\" report\ntable information about transactions start time ('BEGIN;' command).\nSomething like that in example below:\n\n2 3h34m52.26s 48,556,167 0.00s BEGIN;\n\n 0.82s | BEGIN;\n 0.82s | BEGIN;\n 0.82s | BEGIN;\n 0.81s | BEGIN;\n 0.81s | BEGIN;\n 0.81s | BEGIN;\n 0.80s | BEGIN;\n 0.80s | BEGIN;\n 0.79s | BEGIN;\n 0.79s | BEGIN;\n\n\nI'm not sure if I'm reading this right, but are there more than 48 million BEGINs that took 0s each (presumably rounded down) and then a handful taking about 0.8s?0.00s - this is the average duration parameter column. 
Them, seems, much more, and those were shown like examples.\n \nIf so, then it's likely nothing to do with the BEGIN and just that the machine was busy doing other things when you started a transaction.Perhaps so, but, at execution time, there were not any problem with performance on those machines.\n \n\n\nDatabases placed on different hardware, OS - Debian GNU/Linux,\nPostgreSQL 9.1\n\nSo, questions are:\n1. Is this a normal situation with transactions start time ( BEGIN method) ?\n\n\nSee above\n\n\n2. How can we reduce transactions start time if it's possible in principle?\n\n\nBelow 0.00? Probably not\n\n\n3. What happens in PostgreSQL on transaction starting time? Can someone\ndescribe this process in detail? (of course, I saw in PostgreSQL source\ncode, for example, definition such kind functions, like StartTransaction\nfunction, but it's not so easy to understand for third-party researcher,\nthat all of these operations mean in real for performance)\n\n\nWell there are two important things to understand:\n1. All* commands run in a transactionYes, I know it. \n2. I think most of the work in getting a new snapshot etc gets pushed back until it's needed.Probably so, but I wanna know, is there any opportunity to optimize this process. \n\nSo - the overall impact of issuing BEGIN should be close to zero.\n\n-- \n Richard Huxton\n Archonet Ltd\nAnd yet, repeating the question: What happens in PostgreSQL on transaction starting time? Can someone\n\ndescribe this process in detail?RegardsAleksei",
"msg_date": "Wed, 25 Jul 2012 09:52:35 +0300",
"msg_from": "Aleksei Arefjev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: transactions start time"
},
{
"msg_contents": "On 24 July 2012 20:21, Richard Huxton <[email protected]> wrote:\n\n> On 24/07/12 12:14, Aleksei Arefjev wrote:\n>\n>> Hi,\n>>\n>> In statistical reports gathered by PgBadger on our PostgreSQL databases\n>> almost always we have in \"Queries that took up the most time\" report\n>> table information about transactions start time ('BEGIN;' command).\n>> Something like that in example below:\n>>\n>> 2 3h34m52.26s 48,556,167 0.00s BEGIN;\n>>\n>> 0.82s | BEGIN;\n>> 0.82s | BEGIN;\n>> 0.82s | BEGIN;\n>> 0.81s | BEGIN;\n>> 0.81s | BEGIN;\n>> 0.81s | BEGIN;\n>> 0.80s | BEGIN;\n>> 0.80s | BEGIN;\n>> 0.79s | BEGIN;\n>> 0.79s | BEGIN;\n>>\n>\n> I'm not sure if I'm reading this right, but are there more than 48 million\n> BEGINs that took 0s each (presumably rounded down) and then a handful\n> taking about 0.8s?\n>\n> If so, then it's likely nothing to do with the BEGIN and just that the\n> machine was busy doing other things when you started a transaction.\n>\n>\n> Databases placed on different hardware, OS - Debian GNU/Linux,\n>> PostgreSQL 9.1\n>>\n>> So, questions are:\n>> 1. Is this a normal situation with transactions start time ( BEGIN\n>> method) ?\n>>\n>\n> See above\n>\n>\n> 2. How can we reduce transactions start time if it's possible in\n>> principle?\n>>\n>\n> Below 0.00? Probably not\n>\n>\n> 3. What happens in PostgreSQL on transaction starting time? Can someone\n>> describe this process in detail? (of course, I saw in PostgreSQL source\n>> code, for example, definition such kind functions, like StartTransaction\n>> function, but it's not so easy to understand for third-party researcher,\n>> that all of these operations mean in real for performance)\n>>\n>\n> Well there are two important things to understand:\n> 1. All* commands run in a transaction\n> 2. I think most of the work in getting a new snapshot etc gets pushed back\n> until it's needed.\n>\n\nIf so, maybe using of 'SET TRANSACTION SNAPSHOT' command with the\npre-existing transaction exported snapshot by the pg_export_snapshot\nfunction could be usefull for reducing transactions start time -\nhttp://www.postgresql.org/docs/9.2/static/sql-set-transaction.html\n\n\n>\n> So - the overall impact of issuing BEGIN should be close to zero.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\nOn 24 July 2012 20:21, Richard Huxton <[email protected]> wrote:\nOn 24/07/12 12:14, Aleksei Arefjev wrote:\n\nHi,\n\nIn statistical reports gathered by PgBadger on our PostgreSQL databases\nalmost always we have in \"Queries that took up the most time\" report\ntable information about transactions start time ('BEGIN;' command).\nSomething like that in example below:\n\n2 3h34m52.26s 48,556,167 0.00s BEGIN;\n\n 0.82s | BEGIN;\n 0.82s | BEGIN;\n 0.82s | BEGIN;\n 0.81s | BEGIN;\n 0.81s | BEGIN;\n 0.81s | BEGIN;\n 0.80s | BEGIN;\n 0.80s | BEGIN;\n 0.79s | BEGIN;\n 0.79s | BEGIN;\n\n\nI'm not sure if I'm reading this right, but are there more than 48 million BEGINs that took 0s each (presumably rounded down) and then a handful taking about 0.8s?\n\nIf so, then it's likely nothing to do with the BEGIN and just that the machine was busy doing other things when you started a transaction.\n\n\nDatabases placed on different hardware, OS - Debian GNU/Linux,\nPostgreSQL 9.1\n\nSo, questions are:\n1. Is this a normal situation with transactions start time ( BEGIN method) ?\n\n\nSee above\n\n\n2. How can we reduce transactions start time if it's possible in principle?\n\n\nBelow 0.00? Probably not\n\n\n3. 
What happens in PostgreSQL on transaction starting time? Can someone\ndescribe this process in detail? (of course, I saw in PostgreSQL source\ncode, for example, definition such kind functions, like StartTransaction\nfunction, but it's not so easy to understand for third-party researcher,\nthat all of these operations mean in real for performance)\n\n\nWell there are two important things to understand:\n1. All* commands run in a transaction\n2. I think most of the work in getting a new snapshot etc gets pushed back until it's needed.If so, maybe using of 'SET TRANSACTION SNAPSHOT' command with the pre-existing transaction exported snapshot by the pg_export_snapshot function could be usefull for reducing transactions start time -\nhttp://www.postgresql.org/docs/9.2/static/sql-set-transaction.html \n\nSo - the overall impact of issuing BEGIN should be close to zero.\n\n-- \n Richard Huxton\n Archonet Ltd",
"msg_date": "Wed, 25 Jul 2012 10:37:54 +0300",
"msg_from": "Aleksei Arefjev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: transactions start time"
},
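For reference, the snapshot export mentioned above is a 9.2 feature meant to give several sessions the same consistent view of the data (for example for parallel dumps); it does not make BEGIN itself cheaper. A minimal sketch of how it is used, where the snapshot identifier is whatever pg_export_snapshot() returns (the value below is just the example format from the documentation):

    -- Session 1
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT pg_export_snapshot();    -- e.g. returns 000003A1-1

    -- Session 2: import before running any query in this transaction
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '000003A1-1';
    -- queries here see exactly the same data as session 1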
{
"msg_contents": "Aleksei Arefjev <[email protected]> writes:\n> On 24 July 2012 20:21, Richard Huxton <[email protected]> wrote:\n>> I'm not sure if I'm reading this right, but are there more than 48 million\n>> BEGINs that took 0s each (presumably rounded down) and then a handful\n>> taking about 0.8s?\n\nI'm wondering exactly where/how the duration was measured. If it was at\na client, maybe the apparent delay had something to do with network\nglitches? It seems suspicious that all the outliers are around 0.8s.\nIt would be useful to look to see if there's any comparable pattern\nfor statements other than BEGIN.\n\nAs Richard says, a BEGIN by itself ought to take negligible time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Jul 2012 10:56:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: transactions start time"
},
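Since pgBadger builds its duration figures from the server log, it is worth confirming what the server is actually logging before interpreting per-statement times; one common logging setup for this kind of analysis looks roughly like this (values are examples, not the poster's configuration):

    # postgresql.conf
    log_min_duration_statement = 0              # log every statement with its duration
    log_line_prefix = '%t [%p]: user=%u,db=%d ' # a pgBadger-parsable prefix
    lc_messages = 'C'                           # pgBadger expects English log messages

If the durations instead come from a client or from a pooler in front of the database, waits outside the backend get attributed to whatever statement was in flight, which fits the explanation that follows.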
{
"msg_contents": "Hi,\n\nOn Wednesday, July 25, 2012 04:56:20 PM Tom Lane wrote:\n> Aleksei Arefjev <[email protected]> writes:\n> > On 24 July 2012 20:21, Richard Huxton <[email protected]> wrote:\n> >> I'm not sure if I'm reading this right, but are there more than 48\n> >> million BEGINs that took 0s each (presumably rounded down) and then a\n> >> handful taking about 0.8s?\n> \n> I'm wondering exactly where/how the duration was measured. If it was at\n> a client, maybe the apparent delay had something to do with network\n> glitches? It seems suspicious that all the outliers are around 0.8s.\n> It would be useful to look to see if there's any comparable pattern\n> for statements other than BEGIN.\n> \n> As Richard says, a BEGIN by itself ought to take negligible time.\nHe earlier also asked on the IRC-Channel and I got the idea that the problem \ncould be explained by pgbouncer in transaction pooling mode waiting for a free \nbackend connection. Aleksei confirmed that they use pgbouncer in that \nconfiguration, so that might be it.\n\nAndres\n-- \nAndres Freund\t\thttp://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n",
"msg_date": "Wed, 25 Jul 2012 19:01:17 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: transactions start time"
}
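To make the pgbouncer explanation concrete: in transaction pooling mode a client's BEGIN can only proceed once a server connection is free, so pool waits show up in the logs as slow BEGINs. A cut-down pgbouncer.ini illustrating the relevant settings (all names and values are arbitrary examples):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    pool_mode = transaction     ; a server connection is assigned per transaction
    default_pool_size = 20      ; if all 20 are busy, BEGIN waits in the queue
    max_client_conn = 500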
] |
[
{
"msg_contents": "Hello,\n\nUnder FreeBSD 9, what filesystem should I use for PostgreSQL? (Dell \nPowerEdge 2900, 24G mem, 10x2T SATA2 disk, Intel RAID controller.)\n\n * ZFS is journaled, and it is more independent of the hardware. So if\n the computer goes wrong, I can move the zfs array to a different server.\n * UFS is not journaled. Also I have to rely on the RAID card to build\n the RAID array. If there is a hw problem with it, then I won't be\n able to recover the data easily.\n\nI wonder if UFS has better performance or not. Or can you suggest \nanother fs? Just of the PGDATA directory.\n\nThanks,\n\n Laszlo\n\n\n\n\n\n\n\n\n Hello,\n\n Under FreeBSD 9, what filesystem should I use for PostgreSQL? (Dell\n PowerEdge 2900, 24G mem, 10x2T SATA2 disk, Intel RAID controller.)\n\n\nZFS is journaled, and it is more independent of the hardware.\n So if the computer goes wrong, I can move the zfs array to a\n different server.\nUFS is not journaled. Also I have to rely on the RAID card to\n build the RAID array. If there is a hw problem with it, then I\n won't be able to recover the data easily.\n\nI wonder if UFS has better performance or not. Or can you suggest\n another fs? Just of the PGDATA directory.\n\nThanks,\n\n Laszlo",
"msg_date": "Tue, 24 Jul 2012 14:51:07 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "ZFS vs. UFS"
},
{
"msg_contents": "Hi.\n\nAs far as I know UFS is faster than ZFS on FreeBSD 9.0.\n\nSome users reported stability problem with ZFS on AMD64 and maybe UFS is \nbetter choice.\n\nBest regards\nGeorgi\n\nOn 07/24/2012 03:51 PM, Laszlo Nagy wrote:\n>\n> Hello,\n>\n> Under FreeBSD 9, what filesystem should I use for PostgreSQL? (Dell\n> PowerEdge 2900, 24G mem, 10x2T SATA2 disk, Intel RAID controller.)\n>\n> * ZFS is journaled, and it is more independent of the hardware. So\n> if the computer goes wrong, I can move the zfs array to a\n> different server.\n> * UFS is not journaled. Also I have to rely on the RAID card to\n> build the RAID array. If there is a hw problem with it, then I\n> won't be able to recover the data easily.\n>\n> I wonder if UFS has better performance or not. Or can you suggest\n> another fs? Just of the PGDATA directory.\n>\n> Thanks,\n>\n> Laszlo\n>\n",
"msg_date": "Tue, 24 Jul 2012 16:03:45 +0300",
"msg_from": "Georgi Naplatanov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ZFS vs. UFS"
},
{
"msg_contents": "On 24/07/2012 14:51, Laszlo Nagy wrote:\n> \n> Hello,\n> \n> Under FreeBSD 9, what filesystem should I use for PostgreSQL? (Dell\n> PowerEdge 2900, 24G mem, 10x2T SATA2 disk, Intel RAID controller.)\n> \n> * ZFS is journaled, and it is more independent of the hardware. So if\n> the computer goes wrong, I can move the zfs array to a different server.\n> * UFS is not journaled. Also I have to rely on the RAID card to build\n> the RAID array. If there is a hw problem with it, then I won't be\n> able to recover the data easily.\n> \n> I wonder if UFS has better performance or not. Or can you suggest\n> another fs? Just of the PGDATA directory.\n\nHi,\n\nI think you might actually get a bit more performance out of ZFS,\ndepending on your load, server configuration and (more so) the tuning of\nZFS... however UFS is IMO more stable so I use it more often. A hardware\nRAID card would be good to have, but you can use soft-RAID the usual way\nand not be locked-in by the controller.\n\nYou can activate softupdates-journalling on UFS if you really want it,\nbut I find that regular softupdates is perfectly fine for PostgreSQL,\nwhich has its own journalling.",
"msg_date": "Tue, 24 Jul 2012 15:18:29 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ZFS vs. UFS"
},
{
"msg_contents": "Hello,\nThe Postres 9.0 database we use gets about 20K inserts per minute. As \nlong as you don't query at the same time the database is copying fine. \nHowever long running queries seems to delay so much the db that the \napplication server buffers the incoming data as it cannot insert them \nfast enough. The server has 4 HD. One is used for archive, past static \ntables, the second is the index of the current live tables and the third \nis the current data. The fourth is the OS.\n\nThe serve specs are:\nIntel(R) Xeon(R) CPU W3520 @ 2.67GHz\n4 cores\n18GB Ram\n\nDo you think that this work load is high that requires an upgrade to \ncluster or RAID 10 to cope with it?\n\nKind Regards\nYiannis\n",
"msg_date": "Tue, 24 Jul 2012 14:22:34 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Heavy inserts load wile querying..."
},
{
"msg_contents": "On 24.07.2012 14:51, Laszlo Nagy wrote:\n\n> * UFS is not journaled.\n\nThere is journal support for UFS as far as i know. Please have a look at \nthe gjournal manpage.\n\nGreetings,\nTorsten\n",
"msg_date": "Tue, 24 Jul 2012 16:23:19 +0200",
"msg_from": "Torsten Zuehlsdorff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ZFS vs. UFS"
},
{
"msg_contents": "On Tue, Jul 24, 2012 at 6:22 AM, Ioannis Anagnostopoulos <[email protected]\n> wrote:\n\n> Hello,\n> The Postres 9.0 database we use gets about 20K inserts per minute. As long\n> as you don't query at the same time the database is copying fine. However\n> long running queries seems to delay so much the db that the application\n> server buffers the incoming data as it cannot insert them fast enough. The\n> server has 4 HD. One is used for archive, past static tables, the second is\n> the index of the current live tables and the third is the current data. The\n> fourth is the OS.\n>\n> The serve specs are:\n> Intel(R) Xeon(R) CPU W3520 @ 2.67GHz\n> 4 cores\n> 18GB Ram\n>\n> Do you think that this work load is high that requires an upgrade to\n> cluster or RAID 10 to cope with it?\n>\n\nYou need to learn more about what exactly is your bottleneck ... memory,\nCPU, or I/O. That said, I suspect you'd be way better off with this\nhardware if you built a single software RAID 10 array and put everything on\nit.\n\nRight now, the backup disk and the OS disk are sitting idle most of the\ntime. With a RAID10 array, you'd at least double, maybe quadruple your\nI/O. And if you added a battery-backed RAID controller, you'd have a\npretty fast system.\n\nCraig\n\n\n>\n> Kind Regards\n> Yiannis\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\nOn Tue, Jul 24, 2012 at 6:22 AM, Ioannis Anagnostopoulos <[email protected]> wrote:\nHello,\nThe Postres 9.0 database we use gets about 20K inserts per minute. As long as you don't query at the same time the database is copying fine. However long running queries seems to delay so much the db that the application server buffers the incoming data as it cannot insert them fast enough. The server has 4 HD. One is used for archive, past static tables, the second is the index of the current live tables and the third is the current data. The fourth is the OS.\n\nThe serve specs are:\nIntel(R) Xeon(R) CPU W3520 @ 2.67GHz\n4 cores\n18GB Ram\n\nDo you think that this work load is high that requires an upgrade to cluster or RAID 10 to cope with it?You need to learn more about what exactly is your bottleneck ... memory, CPU, or I/O. That said, I suspect you'd be way better off with this hardware if you built a single software RAID 10 array and put everything on it.\nRight now, the backup disk and the OS disk are sitting idle most of the time. With a RAID10 array, you'd at least double, maybe quadruple your I/O. And if you added a battery-backed RAID controller, you'd have a pretty fast system.\nCraig \n\nKind Regards\nYiannis\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 24 Jul 2012 07:30:04 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy inserts load wile querying..."
},
{
"msg_contents": ">\n> On 24/07/2012 14:51, Laszlo Nagy wrote:\n> >\n> > Hello,\n> >\n> > Under FreeBSD 9, what filesystem should I use for PostgreSQL? (Dell\n> > PowerEdge 2900, 24G mem, 10x2T SATA2 disk, Intel RAID controller.)\n> >\n> > * ZFS is journaled, and it is more independent of the hardware. So if\n> > the computer goes wrong, I can move the zfs array to a different\n> server.\n> > * UFS is not journaled. Also I have to rely on the RAID card to build\n> > the RAID array. If there is a hw problem with it, then I won't be\n> > able to recover the data easily.\n> >\n> > I wonder if UFS has better performance or not. Or can you suggest\n> > another fs? Just of the PGDATA directory.\n>\n\nRelying on physically moving a disk isn't a good backup/recovery strategy.\nDisks are the least reliable single component in a modern computer. You\nshould figure out the best file system for your application, and separately\nfigure out a recovery strategy, one that can survive the failure of *any*\ncomponent in your system, including the disk itself.\n\nCraig\n\nOn 24/07/2012 14:51, Laszlo Nagy wrote:\n>\n> Hello,\n>\n> Under FreeBSD 9, what filesystem should I use for PostgreSQL? (Dell\n> PowerEdge 2900, 24G mem, 10x2T SATA2 disk, Intel RAID controller.)\n>\n> * ZFS is journaled, and it is more independent of the hardware. So if\n> the computer goes wrong, I can move the zfs array to a different server.\n> * UFS is not journaled. Also I have to rely on the RAID card to build\n> the RAID array. If there is a hw problem with it, then I won't be\n> able to recover the data easily.\n>\n> I wonder if UFS has better performance or not. Or can you suggest\n> another fs? Just of the PGDATA directory.Relying on physically moving a disk isn't a good backup/recovery strategy. Disks are the least reliable single component in a modern computer. You should figure out the best file system for your application, and separately figure out a recovery strategy, one that can survive the failure of *any* component in your system, including the disk itself.\nCraig",
"msg_date": "Tue, 24 Jul 2012 07:34:10 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ZFS vs. UFS"
},
{
"msg_contents": "On 24/07/2012 15:30, Craig James wrote:\n>\n>\n> On Tue, Jul 24, 2012 at 6:22 AM, Ioannis Anagnostopoulos \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Hello,\n> The Postres 9.0 database we use gets about 20K inserts per minute.\n> As long as you don't query at the same time the database is\n> copying fine. However long running queries seems to delay so much\n> the db that the application server buffers the incoming data as it\n> cannot insert them fast enough. The server has 4 HD. One is used\n> for archive, past static tables, the second is the index of the\n> current live tables and the third is the current data. The fourth\n> is the OS.\n>\n> The serve specs are:\n> Intel(R) Xeon(R) CPU W3520 @ 2.67GHz\n> 4 cores\n> 18GB Ram\n>\n> Do you think that this work load is high that requires an upgrade\n> to cluster or RAID 10 to cope with it?\n>\n>\n> You need to learn more about what exactly is your bottleneck ... \n> memory, CPU, or I/O. That said, I suspect you'd be way better off \n> with this hardware if you built a single software RAID 10 array and \n> put everything on it.\n>\n> Right now, the backup disk and the OS disk are sitting idle most of \n> the time. With a RAID10 array, you'd at least double, maybe quadruple \n> your I/O. And if you added a battery-backed RAID controller, you'd \n> have a pretty fast system.\n>\n> Craig\n>\n>\n> Kind Regards\n> Yiannis\n>\n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\nI can only assume that it is an i/o issue. At last this is what I can \nread from iostat:\n\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s \navgrq-sz avgqu-sz await svctm %util\nsda 0.00 277.50 0.00 20.00 0.00 2344.00 \n117.20 0.09 2.25 4.50 9.00\nsdb 1.00 0.50 207.50 4.50 45228.00 33.50 \n213.50 2.40 11.34 4.13 87.50\nsdc 0.00 0.00 29.50 0.00 4916.00 0.00 \n166.64 0.11 3.73 1.36 4.00\nsdd 0.00 0.00 4.00 179.50 96.00 3010.00 \n16.93 141.25 828.77 5.45 100.00\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 7.60 0.00 2.08 46.45 0.00 43.87\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s \navgrq-sz avgqu-sz await svctm %util\nsda 0.00 61.50 0.00 28.00 0.00 704.00 \n25.14 0.04 3.04 1.43 4.00\nsdb 2.00 0.00 90.50 162.00 19560.00 2992.00 \n89.31 78.92 194.26 3.76 95.00\nsdc 0.00 0.00 10.50 0.00 2160.00 0.00 \n205.71 0.02 1.90 1.90 2.00\nsdd 0.00 0.00 1.50 318.50 24.00 5347.00 \n16.78 134.72 572.81 3.12 100.00\n\nWhere sdb is the data disk and sdd is the index disk. \"Top\" hardly \nreports anything more than 10% per postgress process ever, while when \nthe query is running, these numbers on iostat are consistatnly high. At \nleast I can identify my buffering the moment that index hits 100% util. \nIs there any other way that I can identify bottlenecks in a more \npositive way?\n\n\n\n\n\n\n\nOn 24/07/2012 15:30, Craig James wrote:\n\n\n\nOn Tue, Jul 24, 2012 at 6:22 AM, Ioannis\n Anagnostopoulos <[email protected]>\n wrote:\n\n Hello,\n The Postres 9.0 database we use gets about 20K inserts per\n minute. As long as you don't query at the same time the\n database is copying fine. However long running queries seems\n to delay so much the db that the application server buffers\n the incoming data as it cannot insert them fast enough. The\n server has 4 HD. One is used for archive, past static tables,\n the second is the index of the current live tables and the\n third is the current data. 
The fourth is the OS.\n\n The serve specs are:\n Intel(R) Xeon(R) CPU W3520 @ 2.67GHz\n 4 cores\n 18GB Ram\n\n Do you think that this work load is high that requires an\n upgrade to cluster or RAID 10 to cope with it?\n\n\n You need to learn more about what exactly is your bottleneck\n ... memory, CPU, or I/O. That said, I suspect you'd be way\n better off with this hardware if you built a single software\n RAID 10 array and put everything on it.\n\n Right now, the backup disk and the OS disk are sitting idle\n most of the time. With a RAID10 array, you'd at least double,\n maybe quadruple your I/O. And if you added a battery-backed\n RAID controller, you'd have a pretty fast system.\n\n Craig\n \n\n\n\n Kind Regards\n Yiannis\n\n -- \n Sent via pgsql-performance mailing list ([email protected])\n To make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n I can only assume that it is an i/o issue. At last this is what I\n can read from iostat:\n\n\n Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n avgrq-sz avgqu-sz await svctm %util\n sda 0.00 277.50 0.00 20.00 0.00 2344.00 \n 117.20 0.09 2.25 4.50 9.00\n sdb 1.00 0.50 207.50 4.50 45228.00 33.50 \n 213.50 2.40 11.34 4.13 87.50\n sdc 0.00 0.00 29.50 0.00 4916.00 0.00 \n 166.64 0.11 3.73 1.36 4.00\n sdd 0.00 0.00 4.00 179.50 96.00 3010.00 \n 16.93 141.25 828.77 5.45 100.00\n\n avg-cpu: %user %nice %system %iowait %steal %idle\n 7.60 0.00 2.08 46.45 0.00 43.87\n\n Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s\n avgrq-sz avgqu-sz await svctm %util\n sda 0.00 61.50 0.00 28.00 0.00 704.00 \n 25.14 0.04 3.04 1.43 4.00\n sdb 2.00 0.00 90.50 162.00 19560.00 2992.00 \n 89.31 78.92 194.26 3.76 95.00\n sdc 0.00 0.00 10.50 0.00 2160.00 0.00 \n 205.71 0.02 1.90 1.90 2.00\n sdd 0.00 0.00 1.50 318.50 24.00 5347.00 \n 16.78 134.72 572.81 3.12 100.00\n\n Where sdb is the data disk and sdd is the index disk. \"Top\" hardly\n reports anything more than 10% per postgress process ever, while\n when the query is running, these numbers on iostat are consistatnly\n high. At least I can identify my buffering the moment that index\n hits 100% util. Is there any other way that I can identify\n bottlenecks in a more positive way?",
"msg_date": "Tue, 24 Jul 2012 15:42:07 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Heavy inserts load wile querying..."
},
{
"msg_contents": "> > I wonder if UFS has better performance or not. Or can you suggest\n> > another fs? Just of the PGDATA directory.\n>\n>\n> Relying on physically moving a disk isn't a good backup/recovery \n> strategy. Disks are the least reliable single component in a modern \n> computer. You should figure out the best file system for your \n> application, and separately figure out a recovery strategy, one that \n> can survive the failure of *any* component in your system, including \n> the disk itself.\nThis is why I use a RAID array of 10 disks. So there is no single point \nof failure. What else could I do? (Yes, I can make regular backups, but \nthat is not the same. I can still loose data...)\n\n\n\n\n\n\n\n\n\n\n > I wonder if UFS has better performance or not. Or can you\n suggest\n > another fs? Just of the PGDATA directory.\n\n\n\n Relying on physically moving a disk isn't a good backup/recovery\n strategy. Disks are the least reliable single component in a\n modern computer. You should figure out the best file system for\n your application, and separately figure out a recovery strategy,\n one that can survive the failure of *any* component in your\n system, including the disk itself.\n\n This is why I use a RAID array of 10 disks. So there is no single\n point of failure. What else could I do? (Yes, I can make regular\n backups, but that is not the same. I can still loose data...)",
"msg_date": "Tue, 24 Jul 2012 20:27:19 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ZFS vs. UFS"
},
{
"msg_contents": "\n> On 24.07.2012 14:51, Laszlo Nagy wrote:\n>\n>> * UFS is not journaled.\n>\n> There is journal support for UFS as far as i know. Please have a look \n> at the gjournal manpage.\nYes, but gjournal works for disk devices. I would have rely on the hw \ncard for RAID. When the card goes wrong I won't be able to access my data.\n\nI could also buy an identical RAID card. In fact I could buy a complete \nbackup server. But right now I don't have the money for that. So I would \nlike to use a solution that allows me to recover from a failure even if \nthe RAID card goes wrong.\n\nIt might also be possible to combine gmirror + gjournal, but that is not \ngood enough. Performance and stability of a simple gmirror with two \ndisks is much worse then a raidz array with 10 disks (and hot spare), or \neven a raid 1+0 (and hot spare) that is supported by the hw RAID card.\n\nSo I would like to stick with UFS+hw card support (and then I need to \nbuy an identical RAID card if I can), or ZFS.\n\n\n",
"msg_date": "Tue, 24 Jul 2012 20:35:49 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ZFS vs. UFS"
},
{
"msg_contents": "On Tue, Jul 24, 2012 at 11:27 AM, Laszlo Nagy <[email protected]> wrote:\n\n>\n> > I wonder if UFS has better performance or not. Or can you suggest\n>> > another fs? Just of the PGDATA directory.\n>>\n>\n> Relying on physically moving a disk isn't a good backup/recovery\n> strategy. Disks are the least reliable single component in a modern\n> computer. You should figure out the best file system for your application,\n> and separately figure out a recovery strategy, one that can survive the\n> failure of *any* component in your system, including the disk itself.\n>\n> This is why I use a RAID array of 10 disks. So there is no single point of\n> failure. What else could I do? (Yes, I can make regular backups, but that\n> is not the same. I can still loose data...)\n>\n\nOnly you can answer that because it depends on your application. If you're\noperating PayPal, you probably want 24/7 100% reliability. If you're\noperating a social networking site for teenagers, losing data is probably\nnot a catastrophe.\n\nIn my experience, most data loss is NOT from equipment failure. It's from\nsoftware bugs and operator errors. If your recovery plan doesn't cover\nthis, you have a problem.\n\nCraig\n\nOn Tue, Jul 24, 2012 at 11:27 AM, Laszlo Nagy <[email protected]> wrote:\n\n\n\n\n\n > I wonder if UFS has better performance or not. Or can you\n suggest\n > another fs? Just of the PGDATA directory.\n\n\n\n Relying on physically moving a disk isn't a good backup/recovery\n strategy. Disks are the least reliable single component in a\n modern computer. You should figure out the best file system for\n your application, and separately figure out a recovery strategy,\n one that can survive the failure of *any* component in your\n system, including the disk itself.\n\n This is why I use a RAID array of 10 disks. So there is no single\n point of failure. What else could I do? (Yes, I can make regular\n backups, but that is not the same. I can still loose data...)\n\nOnly you can answer that because it depends on your application. If you're operating PayPal, you probably want 24/7 100% reliability. If you're operating a social networking site for teenagers, losing data is probably not a catastrophe.\nIn my experience, most data loss is NOT from equipment failure. It's from software bugs and operator errors. If your recovery plan doesn't cover this, you have a problem.Craig",
"msg_date": "Tue, 24 Jul 2012 12:14:33 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ZFS vs. UFS"
},
{
"msg_contents": ">> On 24.07.2012 14:51, Laszlo Nagy wrote:\n>>\n>>> * UFS is not journaled.\n>>\n>> There is journal support for UFS as far as i know. Please have a look\n>> at the gjournal manpage.\n >\n> Yes, but gjournal works for disk devices.\n\nThat isn't completly correct! gjournal works with all GEOM-devices, \nwhich could be not only disk devices, but also (remote) disk devices, \n(remote) files, (remote) software-raids etc.\n\nIt is very easy to mirror the *complete* disk from one *server* to \nanother. I use this technic for customers which need cheap backups of \ntheir complete server.\n\nBut a RAID card will be much faster than this. I just wanted to make \nthis clear.\n\nGreetings,\nTorsten\n",
"msg_date": "Wed, 25 Jul 2012 09:00:37 +0200",
"msg_from": "Torsten Zuehlsdorff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ZFS vs. UFS"
},
{
"msg_contents": "On 07/24/2012 08:51 AM, Laszlo Nagy wrote:\n> Under FreeBSD 9, what filesystem should I use for PostgreSQL? (Dell \n> PowerEdge 2900, 24G mem, 10x2T SATA2 disk, Intel RAID controller.)\n\nWhen Intel RAID controller is that? All of the ones on the motherboard \nare pretty much useless if that's what you have. Those are slower than \nsoftware RAID and it's going to add driver issues you could otherwise \navoid. Better to connect the drives to the non-RAID ports or configure \nthe controller in JBOD mode first.\n\nUsing one of the better RAID controllers, one of Dell's good PERC models \nfor example, is one of the biggest hardware upgrades you could make to \nthis server. If your database is mostly read traffic, it won't matter \nvery much. Write-heavy loads really benefit from a good RAID \ncontroller's write cache.\n\n> * ZFS is journaled, and it is more independent of the hardware. So\n> if the computer goes wrong, I can move the zfs array to a\n> different server.\n> * UFS is not journaled. Also I have to rely on the RAID card to\n> build the RAID array. If there is a hw problem with it, then I\n> won't be able to recover the data easily.\n>\n\nYou should be able to get UFS working with a software mirror and \njournaling using gstripe/gmirror or vinum. It doesn't matter that much \nfor PostgreSQL though. The data writes are journaled by the database, \nand it tries to sync data to disk after updating metadata too. There \nare plenty of PostgreSQL installs on FreeBSD/UFS that work fine.\n\nZFS needs more RAM and has higher CPU overhead than UFS does. It's a \nheavier filesystem all around than UFS is. Your server is fast enough \nthat you should be able to afford it though, and the feature set is \nnice. In addition to the RAID setup being simple to handle, having \nchecksums on your data is a good safety feature for PostgreSQL.\n\nZFS will heavily use server RAM for caching by default, much more so \nthan UFS. Make sure you check into that, and leave enough RAM for the \ndatabase to run too. (Doing *some* caching that way is good for \nPostgres; you just don't want *all* the memory to be used for that)\n\nMoving disks to another server is a very low probability fix for a \nbroken system. The disks are a likely place for the actual failure to \nhappen at in the first place. I like to think more in terms of \"how can \nI create a real-time replica of this data?\" to protect databases, and \nthe standby server for that doesn't need to be an expensive system. \nThat said, there is no reason to set things up so that they only work \nwith that Intel RAID controller, given that it's not a very good piece \nof hardware anyway.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n\n\n\n\n\n On 07/24/2012 08:51 AM, Laszlo Nagy wrote:\n \n\n Under FreeBSD 9, what filesystem should I use for PostgreSQL?\n (Dell PowerEdge 2900, 24G mem, 10x2T SATA2 disk, Intel RAID\n controller.)\n\n\n When Intel RAID controller is that? All of the ones on the\n motherboard are pretty much useless if that's what you have. Those\n are slower than software RAID and it's going to add driver issues\n you could otherwise avoid. Better to connect the drives to the\n non-RAID ports or configure the controller in JBOD mode first.\n\n Using one of the better RAID controllers, one of Dell's good PERC\n models for example, is one of the biggest hardware upgrades you\n could make to this server. 
If your database is mostly read traffic,\n it won't matter very much. Write-heavy loads really benefit from a\n good RAID controller's write cache.\n\n\n\nZFS is journaled, and it is more independent of the\n hardware. So if the computer goes wrong, I can move the zfs\n array to a different server.\nUFS is not journaled. Also I have to rely on the RAID card\n to build the RAID array. If there is a hw problem with it,\n then I won't be able to recover the data easily.\n\n\n\n You should be able to get UFS working with a software mirror and\n journaling using gstripe/gmirror or vinum. It doesn't matter that\n much for PostgreSQL though. The data writes are journaled by the\n database, and it tries to sync data to disk after updating metadata\n too. There are plenty of PostgreSQL installs on FreeBSD/UFS that\n work fine.\n\n ZFS needs more RAM and has higher CPU overhead than UFS does. It's\n a heavier filesystem all around than UFS is. Your server is fast\n enough that you should be able to afford it though, and the feature\n set is nice. In addition to the RAID setup being simple to handle,\n having checksums on your data is a good safety feature for\n PostgreSQL. \n\n ZFS will heavily use server RAM for caching by default, much more so\n than UFS. Make sure you check into that, and leave enough RAM for\n the database to run too. (Doing *some* caching that way is good for\n Postgres; you just don't want *all* the memory to be used for that)\n\n Moving disks to another server is a very low probability fix for a\n broken system. The disks are a likely place for the actual failure\n to happen at in the first place. I like to think more in terms of\n \"how can I create a real-time replica of this data?\" to protect\n databases, and the standby server for that doesn't need to be an\n expensive system. That said, there is no reason to set things up so\n that they only work with that Intel RAID controller, given that it's\n not a very good piece of hardware anyway.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com",
"msg_date": "Fri, 27 Jul 2012 01:24:42 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ZFS vs. UFS"
},
{
"msg_contents": "\n> When Intel RAID controller is that? All of the ones on the \n> motherboard are pretty much useless if that's what you have. Those are \n> slower than software RAID and it's going to add driver issues you \n> could otherwise avoid. Better to connect the drives to the non-RAID \n> ports or configure the controller in JBOD mode first.\n>\n> Using one of the better RAID controllers, one of Dell's good PERC \n> models for example, is one of the biggest hardware upgrades you could \n> make to this server. If your database is mostly read traffic, it \n> won't matter very much. Write-heavy loads really benefit from a good \n> RAID controller's write cache.\nActually, it is a PERC with write-cache and BBU.\n>\n> ZFS will heavily use server RAM for caching by default, much more so \n> than UFS. Make sure you check into that, and leave enough RAM for the \n> database to run too. (Doing *some* caching that way is good for \n> Postgres; you just don't want *all* the memory to be used for that)\nRight now, the size of the database is below 5GB. So I guess it will fit \ninto memory. I'm concerned about data safety and availability. I have \nbeen in a situation where the RAID card went wrong and I was not able to \nrecover the data because I could not get an identical RAID card in time. \nI have also been in a situation where the system was crashing two times \na day, and we didn't know why. (As it turned out, it was a bug in the \n\"stable\" kernel and we could not identify this for two weeks.) However, \nwe had to do fsck after every crash. With a 10TB disk array, it was \nextremely painful. ZFS is much better: short recovery time and it is \nRAID card independent. So I think I have answered my own question - I'm \ngoing to use ZFS to have better availability, even if it leads to poor \nperformance. (That was the original question: how bad it it to use ZFS \nfor PostgreSQL, instead of the native UFS.)\n>\n> Moving disks to another server is a very low probability fix for a \n> broken system. The disks are a likely place for the actual failure to \n> happen at in the first place.\nYes, but we don't have to worry about that. raidz2 + hot spare is safe \nenough. The RAID card is the only single point of failure.\n> I like to think more in terms of \"how can I create a real-time replica \n> of this data?\" to protect databases, and the standby server for that \n> doesn't need to be an expensive system. That said, there is no reason \n> to set things up so that they only work with that Intel RAID \n> controller, given that it's not a very good piece of hardware anyway.\nI'm not sure how to create a real-time replica. This database is updated \nfrequently. There is always a process that reads/writes into the \ndatabase. I was thinking about using slony to create slave databases. I \nhave no experience with that. We have a 100Mbit connection. I'm not sure \nhow much bandwidth we need to maintain a real-time slave database. It \nmight be a good idea.\n\nI'm sorry, I feel I'm being off-topic.\n",
"msg_date": "Tue, 31 Jul 2012 10:50:11 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ZFS vs. UFS"
},
{
"msg_contents": "On Tue, Jul 31, 2012 at 1:50 AM, Laszlo Nagy <[email protected]> wrote:\n\n>\n> When Intel RAID controller is that? All of the ones on the motherboard\n>> are pretty much useless if that's what you have. Those are slower than\n>> software RAID and it's going to add driver issues you could otherwise\n>> avoid. Better to connect the drives to the non-RAID ports or configure the\n>> controller in JBOD mode first.\n>>\n>> Using one of the better RAID controllers, one of Dell's good PERC models\n>> for example, is one of the biggest hardware upgrades you could make to this\n>> server. If your database is mostly read traffic, it won't matter very\n>> much. Write-heavy loads really benefit from a good RAID controller's write\n>> cache.\n>>\n> Actually, it is a PERC with write-cache and BBU.\n>\n\nLast time I checked, \"PERC\" was a meaningless name. Dell put that label on\na variety of different controllers ... some were quite good, some were\nterrible. The latest PERC controllers are pretty good. If your machine is\na few years old, the PERC controller may be a piece of junk.\n\nCraig\n\n\n>\n>> ZFS will heavily use server RAM for caching by default, much more so than\n>> UFS. Make sure you check into that, and leave enough RAM for the database\n>> to run too. (Doing *some* caching that way is good for Postgres; you just\n>> don't want *all* the memory to be used for that)\n>>\n> Right now, the size of the database is below 5GB. So I guess it will fit\n> into memory. I'm concerned about data safety and availability. I have been\n> in a situation where the RAID card went wrong and I was not able to recover\n> the data because I could not get an identical RAID card in time. I have\n> also been in a situation where the system was crashing two times a day, and\n> we didn't know why. (As it turned out, it was a bug in the \"stable\" kernel\n> and we could not identify this for two weeks.) However, we had to do fsck\n> after every crash. With a 10TB disk array, it was extremely painful. ZFS is\n> much better: short recovery time and it is RAID card independent. So I\n> think I have answered my own question - I'm going to use ZFS to have better\n> availability, even if it leads to poor performance. (That was the original\n> question: how bad it it to use ZFS for PostgreSQL, instead of the native\n> UFS.)\n>\n>>\n>> Moving disks to another server is a very low probability fix for a broken\n>> system. The disks are a likely place for the actual failure to happen at\n>> in the first place.\n>>\n> Yes, but we don't have to worry about that. raidz2 + hot spare is safe\n> enough. The RAID card is the only single point of failure.\n>\n>> I like to think more in terms of \"how can I create a real-time replica of\n>> this data?\" to protect databases, and the standby server for that doesn't\n>> need to be an expensive system. That said, there is no reason to set\n>> things up so that they only work with that Intel RAID controller, given\n>> that it's not a very good piece of hardware anyway.\n>>\n> I'm not sure how to create a real-time replica. This database is updated\n> frequently. There is always a process that reads/writes into the database.\n> I was thinking about using slony to create slave databases. I have no\n> experience with that. We have a 100Mbit connection. I'm not sure how much\n> bandwidth we need to maintain a real-time slave database. 
It might be a\n> good idea.\n>\n> I'm sorry, I feel I'm being off-topic.\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.**\n> org <[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-performance<http://www.postgresql.org/mailpref/pgsql-performance>\n>\n\nOn Tue, Jul 31, 2012 at 1:50 AM, Laszlo Nagy <[email protected]> wrote:\n\n\nWhen Intel RAID controller is that? All of the ones on the motherboard are pretty much useless if that's what you have. Those are slower than software RAID and it's going to add driver issues you could otherwise avoid. Better to connect the drives to the non-RAID ports or configure the controller in JBOD mode first.\n\nUsing one of the better RAID controllers, one of Dell's good PERC models for example, is one of the biggest hardware upgrades you could make to this server. If your database is mostly read traffic, it won't matter very much. Write-heavy loads really benefit from a good RAID controller's write cache.\n\nActually, it is a PERC with write-cache and BBU.Last time I checked, \"PERC\" was a meaningless name. Dell put that label on a variety of different controllers ... some were quite good, some were terrible. The latest PERC controllers are pretty good. If your machine is a few years old, the PERC controller may be a piece of junk.\nCraig \n\n\nZFS will heavily use server RAM for caching by default, much more so than UFS. Make sure you check into that, and leave enough RAM for the database to run too. (Doing *some* caching that way is good for Postgres; you just don't want *all* the memory to be used for that)\n\nRight now, the size of the database is below 5GB. So I guess it will fit into memory. I'm concerned about data safety and availability. I have been in a situation where the RAID card went wrong and I was not able to recover the data because I could not get an identical RAID card in time. I have also been in a situation where the system was crashing two times a day, and we didn't know why. (As it turned out, it was a bug in the \"stable\" kernel and we could not identify this for two weeks.) However, we had to do fsck after every crash. With a 10TB disk array, it was extremely painful. ZFS is much better: short recovery time and it is RAID card independent. So I think I have answered my own question - I'm going to use ZFS to have better availability, even if it leads to poor performance. (That was the original question: how bad it it to use ZFS for PostgreSQL, instead of the native UFS.)\n\n\nMoving disks to another server is a very low probability fix for a broken system. The disks are a likely place for the actual failure to happen at in the first place.\n\nYes, but we don't have to worry about that. raidz2 + hot spare is safe enough. The RAID card is the only single point of failure.\n\nI like to think more in terms of \"how can I create a real-time replica of this data?\" to protect databases, and the standby server for that doesn't need to be an expensive system. That said, there is no reason to set things up so that they only work with that Intel RAID controller, given that it's not a very good piece of hardware anyway.\n\nI'm not sure how to create a real-time replica. This database is updated frequently. There is always a process that reads/writes into the database. I was thinking about using slony to create slave databases. I have no experience with that. We have a 100Mbit connection. 
I'm not sure how much bandwidth we need to maintain a real-time slave database. It might be a good idea.\n\nI'm sorry, I feel I'm being off-topic.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 31 Jul 2012 07:33:29 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ZFS vs. UFS"
}
] |
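A possible follow-up to the unanswered question at the end of the heavy-inserts thread above: besides iostat, the suspicion that the index disk is the bottleneck can be cross-checked from inside the database using the standard statistics views that already exist in 9.0. This is only a sketch, not something taken from the thread; the ordering and LIMIT are arbitrary choices, and the interpretation (low cache hit ratio on a busy index points at real index I/O) is a heuristic rather than a rule.

-- Indexes doing the most real (non-cache) block reads; a low hit_ratio on a
-- busy index is consistent with the 100% util seen on the index disk (sdd).
SELECT indexrelname,
       idx_blks_read,
       idx_blks_hit,
       round(idx_blks_hit::numeric / nullif(idx_blks_read + idx_blks_hit, 0), 3) AS hit_ratio
FROM pg_statio_user_indexes
ORDER BY idx_blks_read DESC
LIMIT 10;

-- Backends currently waiting on locks (9.0 exposes a boolean 'waiting'
-- column, not per-backend wait events).
SELECT procpid, waiting, now() - query_start AS runtime, current_query
FROM pg_stat_activity
WHERE waiting;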
[
{
"msg_contents": "Hi,\n\nLacking index hints I have a bit of a problem with a slow select.\n\n select\n pic.objectid as pic_objectid\n ,pu.objectid as pu_objectid\n ,ppcr.preproc_me as pul_preproc_me\n ,pp.price_amount as pp_price_amount\n ,pim.aut_item_container as pim_aut_item_container\n ,COALESCE((select coalesce(pcg.name_preferred, pcg.name) from \ncodec_gfx pcg where pcg.objectid = pim.aut_codec_gfx), 'unknown') as \npcg_name\n ,COALESCE((select pis.name from item_snd pis where pis.objectid = \npim.aut_codec_snd), 'unknown') as pis_name\n-- ,(select pii2.price_arr from item_image pii2 where \npii2.item_common = pic.objectid) as pii_price_arr\n ,pii.price_arr as pii_price_arr\n from\n (\n select\n ppcr.item_common\n ,pul.preproc_me as preproc_me\n ,pul.ul_finished_at as ul_finished_at\n ,pul.to_curator_at as to_curator_at\n from\n pic_curate ppc\n ,pic_curate_row ppcr\n ,uploading pul\n where\n ppc.user_curator = 2 AND\n ppcr.pic_curate = ppc.objectid AND\n ppcr.item_common = pul.item_common\n ) ppcr\n ,item_common pic\n left outer join item_movieclip pim on (pim.item_common = pic.objectid)\n left outer join item_soundclip pisc on (pisc.item_common = \npic.objectid)\n left outer join item_image pii on (pii.item_common = pic.objectid)\n ,user pu\n ,pricing pp\n where\n pic.objectid = ppcr.item_common AND\n pu.objectid = pic.user AND\n pp.item_common = ppcr.item_common AND\n date_trunc ('sec', current_timestamp) BETWEEN pp.startdate and \npp.stopdate\n order by\n ppcr.item_common\n\nItem_common is the main table. It has some 10M rows\n\nThis query executes with...\n\n Nested Loop (cost=256.16..2770236.40 rows=3028 width=523) (actual \ntime=0.141..64428.788 rows=919 l\noops=1)\n -> Nested Loop (cost=256.16..2753774.01 rows=1066 width=515) \n(actual time=0.095..64414.614 rows=919 loops=1)\n -> Nested Loop (cost=256.16..2753472.18 rows=1066 width=501) \n(actual time=0.089..64411.782 rows=919 loops=1)\n -> Merge Join (cost=256.16..2750791.56 rows=1066 \nwidth=477) (actual time=0.080..64318.897 rows=919 loops=1)\n Merge Cond: (pic.objectid = ppcr.item_common)\n -> Merge Left Join (cost=251.72..2733545.74 \nrows=10970452 width=473) (actual time=0.038..63075.673 rows=10831339 \nloops=1)\n Merge Cond: (pic.objectid = pisc.item_common)\n -> Merge Left Join (cost=251.72..2689409.45 \nrows=10970452 width=457) (actual time=0.031..59173.547 rows=10831339 \nloops=1)\n Merge Cond: (pic.objectid = \npii.item_common)\n -> Merge Left Join \n(cost=251.72..1844762.76 rows=10970452 width=404) (actual \ntime=0.022..36763.334 rows=10831339 loops=1)\n Merge Cond: (pic.objectid = \npim.item_common)\n -> Index Scan using \nitem_common_pkey on item_common pic (cost=0.00..1764469.78 \nrows=10970452 width=380) (actual time=0.010..20389.141 rows=10831339 \nloops=1)\n -> Index Scan using \nitem_movieclip_pkey on item_movieclip pim (cost=0.00..34287.89 \nrows=1486673 width=28) (actual time=0.007..839.065 rows=1440175 loops=1)\n -> Index Scan using item_image_pkey \non item_image pii (cost=0.00..707403.77 rows=8785343 width=57) (actual \ntime=0.007..14972.056 rows=8701222 loops=1)\n -> Index Scan using item_soundclip_pkey on \nitem_soundclip pisc (cost=0.00..10690.67 rows=481559 width=20) (actual \ntime=0.007..252.650 rows=478672 loops=1)\n -> Materialize (cost=0.00..109.95 rows=1066 \nwidth=4) (actual time=0.019..1.792 rows=919 loops=1)\n -> Nested Loop (cost=0.00..107.28 \nrows=1066 width=4) (actual time=0.018..1.429 rows=919 loops=1)\n Join Filter: (ppc.objectid = \nppcr.pic_curate)\n -> Index Scan using 
\npic_curate_row_pkey on pic_curate_row ppcr (cost=0.00..58.27 rows=3199 \nwidth=8) (actual time=0.010..0.650 rows=919 loops=1)\n -> Materialize (cost=0.00..1.03 \nrows=1 width=4) (actual time=0.000..0.000 rows=1 loops=919)\n -> Seq Scan on pic_curate ppc \n(cost=0.00..1.02 rows=1 width=4) (actual time=0.005..0.006 rows=1 loops=1)\n Filter: (user_curator = 2)\n -> Index Scan using uploading_x2 on uploading pul \n(cost=0.00..2.50 rows=1 width=24) (actual time=0.100..0.100 rows=1 \nloops=919)\n Index Cond: (pul.item_common = ppcr.item_common)\n -> Index Scan using user_pkey on user pu (cost=0.00..0.27 \nrows=1 width=14) (actual time=0.002..0.002 rows=1 loops=919)\n Index Cond: (pu.objectid = pic.user)\n -> Index Scan using pricing_x1 on pricing pp (cost=0.00..3.55 \nrows=3 width=16) (actual time=0.004..0.005 rows=1 loops=919)\n Index Cond: (pp.item_common = ppcr.item_common)\n Filter: ((date_trunc('sec'::text, now()) >= pp.startdate) AND \n(date_trunc('sec'::text, now()) <= pp.stopdate))\n SubPlan 1\n -> Index Scan using codec_gfx_pkey on codec_gfx pcg \n(cost=0.00..2.26 rows=1 width=27) (actual time=0.000..0.000 rows=0 \nloops=919)\n Index Cond: (objectid = $0)\n SubPlan 2\n -> Seq Scan on item_snd pis (cost=0.00..1.90 rows=1 width=15) \n(actual time=0.007..0.008 rows=0 loops=919)\n Filter: (objectid = $1)\n Total runtime: 64429.074 ms\n(36 rows)\n\n...but if I comment out pii...\n\n-- ,pii.price_arr as pii_price_arr\n...\n-- left outer join item_image pii on (pii.item_common = pic.objectid)\n\nI get...\n\n Nested Loop (cost=0.00..9808.71 rows=1307 width=36) (actual \ntime=0.073..23.335 rows=919 loops=1)\n -> Nested Loop (cost=0.00..2681.09 rows=460 width=32) (actual \ntime=0.037..11.289 rows=919 loops=1)\n -> Nested Loop Left Join (cost=0.00..2550.85 rows=460 \nwidth=32) (actual time=0.033..9.001 rows=919 loops=1)\n -> Nested Loop (cost=0.00..2404.77 rows=460 width=20) \n(actual time=0.029..6.987 rows=919 loops=1)\n -> Nested Loop (cost=0.00..1226.38 rows=460 \nwidth=12) (actual time=0.025..4.065 rows=919 loops=1)\n -> Nested Loop (cost=0.00..50.26 rows=460 \nwidth=4) (actual time=0.018..1.095 rows=919 loops=1)\n Join Filter: (ppc.objectid = \nppcr.pic_curate)\n -> Index Scan using \npic_curate_row_pkey on pic_curate_row ppcr (cost=0.00..35.45 rows=919 \nwidth=8) (actual time=0.008..0.360 rows=919 loops=1)\n -> Materialize (cost=0.00..1.03 \nrows=1 width=4) (actual time=0.000..0.000 rows=1 loops=919)\n -> Seq Scan on pic_curate ppc \n(cost=0.00..1.02 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=1)\n Filter: (user_curator = 2)\n -> Index Scan using uploading_x2 on \nuploading pul (cost=0.00..2.54 rows=1 width=8) (actual \ntime=0.002..0.003 rows=1 loops=919)\n Index Cond: (pul.item_common = \nppcr.item_common)\n -> Index Scan using item_common_pkey on \nitem_common pic (cost=0.00..2.55 rows=1 width=8) (actual \ntime=0.002..0.003 rows=1 loops=919)\n Index Cond: (pic.objectid = ppcr.item_common)\n -> Index Scan using item_movieclip_pkey on \nitem_movieclip pim (cost=0.00..0.31 rows=1 width=16) (actual \ntime=0.002..0.002 rows=0 loops=919)\n Index Cond: (pim.item_common = pic.objectid)\n -> Index Scan using user_pkey on user pu (cost=0.00..0.27 \nrows=1 width=4) (actual time=0.002..0.002 rows=1 loops=919)\n Index Cond: (pu.objectid = pic.user)\n -> Index Scan using pricing_x1 on pricing pp (cost=0.00..3.63 \nrows=3 width=12) (actual time=0.003..0.004 rows=1 loops=919)\n Index Cond: (pp.item_common = ppcr.item_common)\n Filter: ((date_trunc('sec'::text, now()) >= pp.startdate) AND 
\n(date_trunc('sec'::text, now()) <= pp.stopdate))\n SubPlan 1\n -> Index Scan using codec_gfx_pkey on codec_gfx pcg \n(cost=0.00..2.26 rows=1 width=27) (actual time=0.000..0.000 rows=0 \nloops=919)\n Index Cond: (objectid = $0)\n SubPlan 2\n -> Seq Scan on item_snd pis (cost=0.00..1.90 rows=1 width=15) \n(actual time=0.007..0.008 rows=0 loops=919)\n Filter: (objectid = $1)\n Total runtime: 23.564 ms\n(29 rows)\n\nroot@pg9:/usr/local/pgsql90/data# grep -v '^#' postgresql.conf | tr '\\t' \n' ' | grep -v '^ ' | sort -u\ncheckpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0\ncpu_index_tuple_cost = 0.00001 # same scale as above\ndatestyle = 'iso, mdy'\ndefault_text_search_config = 'pg_catalog.english'\neffective_cache_size = 32GB\nlc_messages = 'en_US.UTF-8' # locale for system error message\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\nlisten_addresses = 'localhost,10.0.0.3' # what IP address(es) to listen on;\nmaintenance_work_mem = 2GB # min 1MB\nmax_connections = 500 # (change requires restart)\nport = 5440 # (change requires restart)\nrandom_page_cost = 1.0 # same scale as above\nshared_buffers = 12GB # min 128kB\ntemp_buffers = 64MB # min 800kB\nwal_buffers = 16MB # min 32kB\nwork_mem = 64MB # min 64kB\n\nWithout improvement i tried\nenable_seqscan = off\ncpu_index_tuple_cost = 0\nseq_page_cost = 2.0\n\nThere are several selects looking similar to this in our application \nthat suddenly jumped from a handfull of ms to many seconds. Can I \nworkaround this by config instead of rewriting the sql to an \ninrecognizable nightmare? Preferrable I'd like to turn off full table \nscan completely (where indexes are present), but that didn't bite.\n\nThanks,\nMarcus\n\n",
"msg_date": "Wed, 25 Jul 2012 17:36:09 +0200",
"msg_from": "Marcus Engene <[email protected]>",
"msg_from_op": true,
"msg_subject": "odd planner again, pg 9.0.8"
},
{
"msg_contents": "Hello\n\nyou have too slow merge join\n\nmaybe you have bloated item_common_pkey or item_common relations -\ncan you try reindex or vacuum full\n\nyou use random_page_cost = 1.0 - it can be source of bad plan\n\nRegards\n\nPavel Stehule\n\n2012/7/25 Marcus Engene <[email protected]>:\n> Hi,\n>\n> Lacking index hints I have a bit of a problem with a slow select.\n>\n> select\n> pic.objectid as pic_objectid\n> ,pu.objectid as pu_objectid\n> ,ppcr.preproc_me as pul_preproc_me\n> ,pp.price_amount as pp_price_amount\n> ,pim.aut_item_container as pim_aut_item_container\n> ,COALESCE((select coalesce(pcg.name_preferred, pcg.name) from codec_gfx\n> pcg where pcg.objectid = pim.aut_codec_gfx), 'unknown') as pcg_name\n> ,COALESCE((select pis.name from item_snd pis where pis.objectid =\n> pim.aut_codec_snd), 'unknown') as pis_name\n> -- ,(select pii2.price_arr from item_image pii2 where\n> pii2.item_common = pic.objectid) as pii_price_arr\n> ,pii.price_arr as pii_price_arr\n> from\n> (\n> select\n> ppcr.item_common\n> ,pul.preproc_me as preproc_me\n> ,pul.ul_finished_at as ul_finished_at\n> ,pul.to_curator_at as to_curator_at\n> from\n> pic_curate ppc\n> ,pic_curate_row ppcr\n> ,uploading pul\n> where\n> ppc.user_curator = 2 AND\n> ppcr.pic_curate = ppc.objectid AND\n> ppcr.item_common = pul.item_common\n> ) ppcr\n> ,item_common pic\n> left outer join item_movieclip pim on (pim.item_common = pic.objectid)\n> left outer join item_soundclip pisc on (pisc.item_common =\n> pic.objectid)\n> left outer join item_image pii on (pii.item_common = pic.objectid)\n> ,user pu\n> ,pricing pp\n> where\n> pic.objectid = ppcr.item_common AND\n> pu.objectid = pic.user AND\n> pp.item_common = ppcr.item_common AND\n> date_trunc ('sec', current_timestamp) BETWEEN pp.startdate and\n> pp.stopdate\n> order by\n> ppcr.item_common\n>\n> Item_common is the main table. 
It has some 10M rows\n>\n> This query executes with...\n>\n> Nested Loop (cost=256.16..2770236.40 rows=3028 width=523) (actual\n> time=0.141..64428.788 rows=919 l\n> oops=1)\n> -> Nested Loop (cost=256.16..2753774.01 rows=1066 width=515) (actual\n> time=0.095..64414.614 rows=919 loops=1)\n> -> Nested Loop (cost=256.16..2753472.18 rows=1066 width=501)\n> (actual time=0.089..64411.782 rows=919 loops=1)\n> -> Merge Join (cost=256.16..2750791.56 rows=1066 width=477)\n> (actual time=0.080..64318.897 rows=919 loops=1)\n> Merge Cond: (pic.objectid = ppcr.item_common)\n> -> Merge Left Join (cost=251.72..2733545.74\n> rows=10970452 width=473) (actual time=0.038..63075.673 rows=10831339\n> loops=1)\n> Merge Cond: (pic.objectid = pisc.item_common)\n> -> Merge Left Join (cost=251.72..2689409.45\n> rows=10970452 width=457) (actual time=0.031..59173.547 rows=10831339\n> loops=1)\n> Merge Cond: (pic.objectid =\n> pii.item_common)\n> -> Merge Left Join\n> (cost=251.72..1844762.76 rows=10970452 width=404) (actual\n> time=0.022..36763.334 rows=10831339 loops=1)\n> Merge Cond: (pic.objectid =\n> pim.item_common)\n> -> Index Scan using item_common_pkey\n> on item_common pic (cost=0.00..1764469.78 rows=10970452 width=380) (actual\n> time=0.010..20389.141 rows=10831339 loops=1)\n> -> Index Scan using\n> item_movieclip_pkey on item_movieclip pim (cost=0.00..34287.89 rows=1486673\n> width=28) (actual time=0.007..839.065 rows=1440175 loops=1)\n> -> Index Scan using item_image_pkey on\n> item_image pii (cost=0.00..707403.77 rows=8785343 width=57) (actual\n> time=0.007..14972.056 rows=8701222 loops=1)\n> -> Index Scan using item_soundclip_pkey on\n> item_soundclip pisc (cost=0.00..10690.67 rows=481559 width=20) (actual\n> time=0.007..252.650 rows=478672 loops=1)\n> -> Materialize (cost=0.00..109.95 rows=1066 width=4)\n> (actual time=0.019..1.792 rows=919 loops=1)\n> -> Nested Loop (cost=0.00..107.28 rows=1066\n> width=4) (actual time=0.018..1.429 rows=919 loops=1)\n> Join Filter: (ppc.objectid =\n> ppcr.pic_curate)\n> -> Index Scan using pic_curate_row_pkey on\n> pic_curate_row ppcr (cost=0.00..58.27 rows=3199 width=8) (actual\n> time=0.010..0.650 rows=919 loops=1)\n> -> Materialize (cost=0.00..1.03 rows=1\n> width=4) (actual time=0.000..0.000 rows=1 loops=919)\n> -> Seq Scan on pic_curate ppc\n> (cost=0.00..1.02 rows=1 width=4) (actual time=0.005..0.006 rows=1 loops=1)\n> Filter: (user_curator = 2)\n> -> Index Scan using uploading_x2 on uploading pul\n> (cost=0.00..2.50 rows=1 width=24) (actual time=0.100..0.100 rows=1\n> loops=919)\n> Index Cond: (pul.item_common = ppcr.item_common)\n> -> Index Scan using user_pkey on user pu (cost=0.00..0.27 rows=1\n> width=14) (actual time=0.002..0.002 rows=1 loops=919)\n> Index Cond: (pu.objectid = pic.user)\n> -> Index Scan using pricing_x1 on pricing pp (cost=0.00..3.55 rows=3\n> width=16) (actual time=0.004..0.005 rows=1 loops=919)\n> Index Cond: (pp.item_common = ppcr.item_common)\n> Filter: ((date_trunc('sec'::text, now()) >= pp.startdate) AND\n> (date_trunc('sec'::text, now()) <= pp.stopdate))\n> SubPlan 1\n> -> Index Scan using codec_gfx_pkey on codec_gfx pcg (cost=0.00..2.26\n> rows=1 width=27) (actual time=0.000..0.000 rows=0 loops=919)\n> Index Cond: (objectid = $0)\n> SubPlan 2\n> -> Seq Scan on item_snd pis (cost=0.00..1.90 rows=1 width=15) (actual\n> time=0.007..0.008 rows=0 loops=919)\n> Filter: (objectid = $1)\n> Total runtime: 64429.074 ms\n> (36 rows)\n>\n> ...but if I comment out pii...\n>\n> -- ,pii.price_arr as pii_price_arr\n> ...\n> -- left outer join 
item_image pii on (pii.item_common = pic.objectid)\n>\n> I get...\n>\n> Nested Loop (cost=0.00..9808.71 rows=1307 width=36) (actual\n> time=0.073..23.335 rows=919 loops=1)\n> -> Nested Loop (cost=0.00..2681.09 rows=460 width=32) (actual\n> time=0.037..11.289 rows=919 loops=1)\n> -> Nested Loop Left Join (cost=0.00..2550.85 rows=460 width=32)\n> (actual time=0.033..9.001 rows=919 loops=1)\n> -> Nested Loop (cost=0.00..2404.77 rows=460 width=20)\n> (actual time=0.029..6.987 rows=919 loops=1)\n> -> Nested Loop (cost=0.00..1226.38 rows=460 width=12)\n> (actual time=0.025..4.065 rows=919 loops=1)\n> -> Nested Loop (cost=0.00..50.26 rows=460\n> width=4) (actual time=0.018..1.095 rows=919 loops=1)\n> Join Filter: (ppc.objectid =\n> ppcr.pic_curate)\n> -> Index Scan using pic_curate_row_pkey on\n> pic_curate_row ppcr (cost=0.00..35.45 rows=919 width=8) (actual\n> time=0.008..0.360 rows=919 loops=1)\n> -> Materialize (cost=0.00..1.03 rows=1\n> width=4) (actual time=0.000..0.000 rows=1 loops=919)\n> -> Seq Scan on pic_curate ppc\n> (cost=0.00..1.02 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=1)\n> Filter: (user_curator = 2)\n> -> Index Scan using uploading_x2 on uploading\n> pul (cost=0.00..2.54 rows=1 width=8) (actual time=0.002..0.003 rows=1\n> loops=919)\n> Index Cond: (pul.item_common =\n> ppcr.item_common)\n> -> Index Scan using item_common_pkey on item_common\n> pic (cost=0.00..2.55 rows=1 width=8) (actual time=0.002..0.003 rows=1\n> loops=919)\n> Index Cond: (pic.objectid = ppcr.item_common)\n> -> Index Scan using item_movieclip_pkey on item_movieclip\n> pim (cost=0.00..0.31 rows=1 width=16) (actual time=0.002..0.002 rows=0\n> loops=919)\n> Index Cond: (pim.item_common = pic.objectid)\n> -> Index Scan using user_pkey on user pu (cost=0.00..0.27 rows=1\n> width=4) (actual time=0.002..0.002 rows=1 loops=919)\n> Index Cond: (pu.objectid = pic.user)\n> -> Index Scan using pricing_x1 on pricing pp (cost=0.00..3.63 rows=3\n> width=12) (actual time=0.003..0.004 rows=1 loops=919)\n> Index Cond: (pp.item_common = ppcr.item_common)\n> Filter: ((date_trunc('sec'::text, now()) >= pp.startdate) AND\n> (date_trunc('sec'::text, now()) <= pp.stopdate))\n> SubPlan 1\n> -> Index Scan using codec_gfx_pkey on codec_gfx pcg (cost=0.00..2.26\n> rows=1 width=27) (actual time=0.000..0.000 rows=0 loops=919)\n> Index Cond: (objectid = $0)\n> SubPlan 2\n> -> Seq Scan on item_snd pis (cost=0.00..1.90 rows=1 width=15) (actual\n> time=0.007..0.008 rows=0 loops=919)\n> Filter: (objectid = $1)\n> Total runtime: 23.564 ms\n> (29 rows)\n>\n> root@pg9:/usr/local/pgsql90/data# grep -v '^#' postgresql.conf | tr '\\t' ' '\n> | grep -v '^ ' | sort -u\n> checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0\n> cpu_index_tuple_cost = 0.00001 # same scale as above\n> datestyle = 'iso, mdy'\n> default_text_search_config = 'pg_catalog.english'\n> effective_cache_size = 32GB\n> lc_messages = 'en_US.UTF-8' # locale for system error message\n> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting\n> lc_numeric = 'en_US.UTF-8' # locale for number formatting\n> lc_time = 'en_US.UTF-8' # locale for time formatting\n> listen_addresses = 'localhost,10.0.0.3' # what IP address(es) to listen on;\n> maintenance_work_mem = 2GB # min 1MB\n> max_connections = 500 # (change requires restart)\n> port = 5440 # (change requires restart)\n> random_page_cost = 1.0 # same scale as above\n> shared_buffers = 12GB # min 128kB\n> temp_buffers = 64MB # min 800kB\n> wal_buffers = 16MB # min 32kB\n> work_mem = 64MB # min 
64kB\n>\n> Without improvement i tried\n> enable_seqscan = off\n> cpu_index_tuple_cost = 0\n> seq_page_cost = 2.0\n>\n> There are several selects looking similar to this in our application that\n> suddenly jumped from a handfull of ms to many seconds. Can I workaround this\n> by config instead of rewriting the sql to an inrecognizable nightmare?\n> Preferrable I'd like to turn off full table scan completely (where indexes\n> are present), but that didn't bite.\n>\n> Thanks,\n> Marcus\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 25 Jul 2012 18:19:52 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: odd planner again, pg 9.0.8"
},
{
"msg_contents": "Marcus Engene <[email protected]> writes:\n> Lacking index hints I have a bit of a problem with a slow select.\n\nI don't think you need index hints. What you probably do need is to\nincrease join_collapse_limit and/or from_collapse_limit to deal with\nthis complex query as a whole.\n\n> There are several selects looking similar to this in our application \n> that suddenly jumped from a handfull of ms to many seconds.\n\nPerhaps you had those settings adjusted properly and somebody turned\nthem off again?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 25 Jul 2012 12:39:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: odd planner again, pg 9.0.8"
},
{
"msg_contents": "On 7/25/12 6:39 PM, Tom Lane wrote:\n> Marcus Engene <[email protected]> writes:\n>> Lacking index hints I have a bit of a problem with a slow select.\n> I don't think you need index hints. What you probably do need is to\n> increase join_collapse_limit and/or from_collapse_limit to deal with\n> this complex query as a whole.\n>\n>> There are several selects looking similar to this in our application\n>> that suddenly jumped from a handfull of ms to many seconds.\n> Perhaps you had those settings adjusted properly and somebody turned\n> them off again?\n>\n> \t\t\tregards, tom lane\n>\nWonderful mr Lane, now the query executes amazingly fast! I increased \nfrom_collapse_limit from it default 8 to 10 and it behaves as expected.\n\nThank you!\nMarcus\n\n Sort (cost=10628.68..10631.95 rows=1307 width=89) (actual \ntime=26.430..26.493 rows=919 loops=1)\n Sort Key: ppcr.item_common\n Sort Method: quicksort Memory: 154kB\n -> Nested Loop (cost=0.00..10561.03 rows=1307 width=89) (actual \ntime=0.093..25.612 rows=919 loops=1)\n -> Nested Loop (cost=0.00..3433.41 rows=460 width=85) \n(actual time=0.061..13.257 rows=919 loops=1)\n -> Nested Loop Left Join (cost=0.00..3134.45 rows=460 \nwidth=85) (actual time=0.057..10.972 rows=919 loops=1)\n -> Nested Loop Left Join (cost=0.00..2706.99 \nrows=460 width=32) (actual time=0.053..9.092 rows=919 loops=1)\n -> Nested Loop (cost=0.00..2391.21 \nrows=460 width=20) (actual time=0.047..6.964 rows=919 loops=1)\n -> Nested Loop (cost=0.00..1212.82 \nrows=460 width=12) (actual time=0.039..3.756 rows=919 loops=1)\n -> Nested Loop \n(cost=0.00..36.70 rows=460 width=4) (actual time=0.028..0.436 rows=919 \nloops=1)\n Join Filter: (ppc.objectid \n= ppcr.pic_curate)\n -> Seq Scan on pic_curate \nppc (cost=0.00..1.02 rows=1 width=4) (actual time=0.006..0.006 rows=1 \nloops=1)\n Filter: \n(user_curator = 2)\n -> Seq Scan on \npic_curate_row ppcr (cost=0.00..24.19 rows=919 width=8) (actual \ntime=0.019..0.147 rows=919 loops=1)\n -> Index Scan using \nuploading_x2 on uploading pul (cost=0.00..2.54 rows=1 width=8) (actual \ntime=0.003..0.003 rows=1 loops=919)\n Index Cond: \n(pul.item_common = ppcr.item_common)\n -> Index Scan using item_common_pkey \non item_common pic (cost=0.00..2.55 rows=1 width=8) (actual \ntime=0.003..0.003 rows=1 loops=919)\n Index Cond: (pic.objectid = \nppcr.item_common)\n -> Index Scan using item_movieclip_pkey on \nitem_movieclip pim (cost=0.00..0.67 rows=1 width=16) (actual \ntime=0.002..0.002 rows=0 loops=919)\n Index Cond: (pim.item_common = \npic.objectid)\n -> Index Scan using item_image_pkey on item_image \npii (cost=0.00..0.92 rows=1 width=57) (actual time=0.002..0.002 rows=0 \nloops=919)\n Index Cond: (pii.item_common = pic.objectid)\n -> Index Scan using user_pkey on user pu \n(cost=0.00..0.64 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=919)\n Index Cond: (pu.objectid = pic.user)\n -> Index Scan using pricing_x1 on pricing pp (cost=0.00..3.63 \nrows=3 width=12) (actual time=0.004..0.004 rows=1 loops=919)\n Index Cond: (pp.item_common = ppcr.item_common)\n Filter: ((date_trunc('sec'::text, now()) >= \npp.startdate) AND (date_trunc('sec'::text, now()) <= pp.stopdate))\n SubPlan 1\n -> Index Scan using codec_gfx_pkey on codec_gfx pcg \n(cost=0.00..2.26 rows=1 width=27) (actual time=0.000..0.000 rows=0 \nloops=919)\n Index Cond: (objectid = $0)\n SubPlan 2\n -> Seq Scan on item_snd pis (cost=0.00..1.90 rows=1 \nwidth=15) (actual time=0.007..0.008 rows=0 loops=919)\n Filter: (objectid = $1)\n Total runtime: 26.795 
ms\n(34 rows)\n\n\n",
"msg_date": "Wed, 25 Jul 2012 20:11:55 +0200",
"msg_from": "Marcus Engene <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: odd planner again, pg 9.0.8"
}
] |
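For anyone reproducing Tom's fix, the two settings can be raised for a single session before re-running the slow query under EXPLAIN ANALYZE; a minimal sketch, where 10 simply mirrors the value that worked in this thread rather than a general recommendation:

    SET from_collapse_limit = 10;
    SET join_collapse_limit = 10;
    EXPLAIN ANALYZE SELECT ...;   -- the original slow query goes here

Raising them only per session keeps the extra planning effort away from queries that do not need it.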
[
{
"msg_contents": "Hi,\n\nI've read a little bit about join_collapse_limit and from_collapse_limit \nand I see their reason to exist.\n\nA stupid question: in algorithms 101 you're usually told to make a chess \nprogram and then you usually do a width first min max tree. A low level \nopponent would interrupt this possibly infinite traversal early, thus \nreturning a possibly bad move, and if it's on a higher level it's \nallowed to work longer and it will likely present a better path in the tree.\n\nI understood it as that the *_collapse_limits are to stop a worst case \njoin making the optimizer going haywire, but it feels sad that trivial \nbig joins are cut off even if they're not too nasty.\n\nWhy would it not make some sense to have some time/space constraint on \nthe join heuristics instead of/in combination to how the limit presently \nwork? If we hit the ceiling, the best produced plan so far is used. The \nchess analogy would obviously be a handful chess pieces left but the \nmin-max-tree traversal constraint is on a low depth (rather than \ntime/memory) so it would quickly traverse the few options and then be \nconstrained.\n\nBest regards,\nMarcus\n\n",
"msg_date": "Thu, 26 Jul 2012 16:42:30 +0200",
"msg_from": "Marcus Engene <[email protected]>",
"msg_from_op": true,
"msg_subject": "planner, *_collapse_limit"
},
{
"msg_contents": "On Thu, Jul 26, 2012 at 9:42 AM, Marcus Engene <[email protected]> wrote:\n> Hi,\n>\n> I've read a little bit about join_collapse_limit and from_collapse_limit and\n> I see their reason to exist.\n>\n> A stupid question: in algorithms 101 you're usually told to make a chess\n> program and then you usually do a width first min max tree. A low level\n> opponent would interrupt this possibly infinite traversal early, thus\n> returning a possibly bad move, and if it's on a higher level it's allowed to\n> work longer and it will likely present a better path in the tree.\n>\n> I understood it as that the *_collapse_limits are to stop a worst case join\n> making the optimizer going haywire, but it feels sad that trivial big joins\n> are cut off even if they're not too nasty.\n>\n> Why would it not make some sense to have some time/space constraint on the\n> join heuristics instead of/in combination to how the limit presently work?\n> If we hit the ceiling, the best produced plan so far is used. The chess\n> analogy would obviously be a handful chess pieces left but the min-max-tree\n> traversal constraint is on a low depth (rather than time/memory) so it would\n> quickly traverse the few options and then be constrained.\n\nWell, isn't it the point of the genetic optimizer to solve exactly\nthat problem? I do find it interesting though that there is a window\nbetween collapse limit and geqo.\n\nmerlin\n",
"msg_date": "Thu, 26 Jul 2012 12:01:00 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner, *_collapse_limit"
}
] |
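The "window" Merlin points out comes from the default values of the planner GUCs involved, which can be checked on any server (the comments show the stock defaults):

    SHOW from_collapse_limit;   -- 8 by default
    SHOW join_collapse_limit;   -- 8 by default
    SHOW geqo_threshold;        -- 12 by default; GEQO only takes over at this many FROM items or more

With those defaults, a query with 9 to 11 FROM items is no longer fully flattened by the exhaustive planner, yet is still too small for the genetic optimizer to kick in, which is the gap being discussed.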
[
{
"msg_contents": "In my postgres log I saw a lot of warning like this.\n\n \n\nWARNING: pgstat wait timeout\n\n \n\nEvery 10 seconds aprox since yesterday and after one year working without\nany warning\n\n \n\nI have postgres 9.0.3 on a Windows Server 2008 R2.\n\n \n\nI have only one big table with aprox. 1,300,000,000 (yes 1,300 millions)\nrows the table is not used just keep historical record, could be this the\nreason, some autovacuum over this table?\n\n \n\npg_stat_activity not show any autovacuum over this table but yes over the\ntransactional table, right now the autovacuum about transactional table is\ntaking 30 minutes (now() - xact_start )\n\n \n\n \n\nThanks\n\n \n\n\nIn my postgres log I saw a lot of warning like this… WARNING: pgstat wait timeout Every 10 seconds aprox since yesterday and after one year working without any warning I have postgres 9.0.3 on a Windows Server 2008 R2. I have only one big table with aprox. 1,300,000,000 (yes 1,300 millions) rows the table is not used just keep historical record, could be this the reason, some autovacuum over this table? pg_stat_activity not show any autovacuum over this table but yes over the transactional table, right now the autovacuum about transactional table is taking 30 minutes (now() - xact_start ) Thanks",
"msg_date": "Fri, 27 Jul 2012 18:03:33 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgstat wait timeout"
}
] |
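A sketch of the check the poster describes, using the 9.0-era column names (procpid and current_query; later releases renamed them to pid and query):

    SELECT procpid, datname, now() - xact_start AS xact_age, current_query
    FROM pg_stat_activity
    WHERE current_query LIKE 'autovacuum:%'
    ORDER BY xact_age DESC;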
[
{
"msg_contents": "More information. \n\n \n\nAfter many \"WARNING: pgstat wait timeout\" in the log also appear \"ERROR:\ncanceling autovacuum task \"\n\n \n\n \n\n \n\n \n\nDe: Anibal David Acosta [mailto:[email protected]] \nEnviado el: viernes, 27 de julio de 2012 06:04 p.m.\nPara: [email protected]\nAsunto: pgstat wait timeout\n\n \n\nIn my postgres log I saw a lot of warning like this.\n\n \n\nWARNING: pgstat wait timeout\n\n \n\nEvery 10 seconds aprox since yesterday and after one year working without\nany warning\n\n \n\nI have postgres 9.0.3 on a Windows Server 2008 R2.\n\n \n\nI have only one big table with aprox. 1,300,000,000 (yes 1,300 millions)\nrows the table is not used just keep historical record, could be this the\nreason, some autovacuum over this table?\n\n \n\npg_stat_activity not show any autovacuum over this table but yes over the\ntransactional table, right now the autovacuum about transactional table is\ntaking 30 minutes (now() - xact_start )\n\n \n\n \n\nThanks\n\n \n\n\nMore information… After many “WARNING: pgstat wait timeout” in the log also appear “ERROR: canceling autovacuum task “ De: Anibal David Acosta [mailto:[email protected]] Enviado el: viernes, 27 de julio de 2012 06:04 p.m.Para: [email protected]: pgstat wait timeout In my postgres log I saw a lot of warning like this… WARNING: pgstat wait timeout Every 10 seconds aprox since yesterday and after one year working without any warning I have postgres 9.0.3 on a Windows Server 2008 R2. I have only one big table with aprox. 1,300,000,000 (yes 1,300 millions) rows the table is not used just keep historical record, could be this the reason, some autovacuum over this table? pg_stat_activity not show any autovacuum over this table but yes over the transactional table, right now the autovacuum about transactional table is taking 30 minutes (now() - xact_start ) Thanks",
"msg_date": "Fri, 27 Jul 2012 18:31:21 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgstat wait timeout"
}
] |
[
{
"msg_contents": "Hi,\n\nI am running postgres 9.1.4 on Ubuntu 12.04 and the stats collector is \ngenerating very high IO usage even when nothing appears to be happening \non the system.\n\nI have roughly 150 different databases, each of which is running in 1 of \nroughly 30 tablespaces. The databases are small (the dump of most is \nare under 100M, and all but 3 are under 1G, nothing larger than 2G).\n\nPreviously iotop reported the disk write speed, at ~6MB / second. I \nwent and reset the stats for every database and that shrunk the stats \nfile and brought the IO it down to 1MB / second. I still think this is \ntoo high for an idle database. I've now noticed it is growing.\n\nls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n-rw------- 1 postgres postgres 3515080 Jul 28 11:58 \n/var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n\n*<reset of stats> *\n\nls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n-rw------- 1 postgres postgres 514761 Jul 28 12:11 \n/var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n\n*<watch the file grow>*\n\nls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n-rw------- 1 postgres postgres 776711 Jul 28 12:25 \n/var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n\nIn the 15 minutes since the reset, IO has nearly doubled to 1.6+ MB / \nsecond.\n\nFWIW, I just migrated all these databases over to this new server by \nrestoring from pg_dump I was previously experiencing this on 8.3, which \nwas why I upgraded to 9.1 and I also have another server with similar \nproblems on 9.1.\n\nAny help would be sincerely appreciated.\n\n\nDavid Barton [email protected]\n\n\n\n\n\n\n Hi,\n\n I am running postgres 9.1.4 on Ubuntu 12.04 and the stats collector\n is generating very high IO usage even when nothing appears to be\n happening on the system.\n\n I have roughly 150 different databases, each of which is running in\n 1 of roughly 30 tablespaces. The databases are small (the dump of\n most is are under 100M, and all but 3 are under 1G, nothing larger\n than 2G).\n\n Previously iotop reported the disk write speed, at ~6MB / second. I\n went and reset the stats for every database and that shrunk the\n stats file and brought the IO it down to 1MB / second. I still\n think this is too high for an idle database. I've now noticed it is\n growing.\n\n ls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n -rw------- 1 postgres postgres 3515080 Jul 28 11:58\n /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n\n<reset of stats> \n\n ls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n -rw------- 1 postgres postgres 514761 Jul 28 12:11\n /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n\n<watch the file grow>\n\n ls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n -rw------- 1 postgres postgres 776711 Jul 28 12:25\n /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n\n In the 15 minutes since the reset, IO has nearly doubled to 1.6+ MB\n / second.\n\n FWIW, I just migrated all these databases over to this new server by\n restoring from pg_dump I was previously experiencing this on 8.3,\n which was why I upgraded to 9.1 and I also have another server with\n similar problems on 9.1.\n\n Any help would be sincerely appreciated.\n\n\n David Barton [email protected]",
"msg_date": "Sat, 28 Jul 2012 12:33:17 +0800",
"msg_from": "David Barton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres 9.1.4 - high stats collector IO usage"
},
{
"msg_contents": "Hello\n\nI had same problem with large numbers of tables - you can move\npg_stat_tmp to tmpfs filesystem - it was solution for us\n\nRegards\n\nPavel\n\n2012/7/28 David Barton <[email protected]>:\n> Hi,\n>\n> I am running postgres 9.1.4 on Ubuntu 12.04 and the stats collector is\n> generating very high IO usage even when nothing appears to be happening on\n> the system.\n>\n> I have roughly 150 different databases, each of which is running in 1 of\n> roughly 30 tablespaces. The databases are small (the dump of most is are\n> under 100M, and all but 3 are under 1G, nothing larger than 2G).\n>\n> Previously iotop reported the disk write speed, at ~6MB / second. I went\n> and reset the stats for every database and that shrunk the stats file and\n> brought the IO it down to 1MB / second. I still think this is too high for\n> an idle database. I've now noticed it is growing.\n>\n> ls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n> -rw------- 1 postgres postgres 3515080 Jul 28 11:58\n> /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>\n> <reset of stats>\n>\n> ls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n> -rw------- 1 postgres postgres 514761 Jul 28 12:11\n> /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>\n> <watch the file grow>\n>\n> ls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n> -rw------- 1 postgres postgres 776711 Jul 28 12:25\n> /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>\n> In the 15 minutes since the reset, IO has nearly doubled to 1.6+ MB /\n> second.\n>\n> FWIW, I just migrated all these databases over to this new server by\n> restoring from pg_dump I was previously experiencing this on 8.3, which was\n> why I upgraded to 9.1 and I also have another server with similar problems\n> on 9.1.\n>\n> Any help would be sincerely appreciated.\n>\n>\n> David Barton [email protected]\n",
"msg_date": "Sat, 28 Jul 2012 09:07:48 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 9.1.4 - high stats collector IO usage"
},
{
"msg_contents": "David Barton <[email protected]> writes:\n> I am running postgres 9.1.4 on Ubuntu 12.04 and the stats collector is \n> generating very high IO usage even when nothing appears to be happening \n> on the system.\n\n> I have roughly 150 different databases, each of which is running in 1 of \n> roughly 30 tablespaces. The databases are small (the dump of most is \n> are under 100M, and all but 3 are under 1G, nothing larger than 2G).\n\nThat's a lot of databases. I think your problem probably stems from\nautovacuum madly trying to cover all of them. Backing off (increasing)\nautovacuum_naptime to slow its cycle might help.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 28 Jul 2012 12:13:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 9.1.4 - high stats collector IO usage"
},
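A sketch of the postgresql.conf change Tom is suggesting; the value 5min is purely illustrative (the poster later settled on 15 minutes), and Mark's caution further down about table bloat applies to any large setting:

    # minimum delay between autovacuum runs on any one database (default 1min)
    autovacuum_naptime = 5min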
{
"msg_contents": "Thanks so much, Tom.\n\nThat did the job. I increased it to every 15 minutes and it has dropped \nsubstantially even though the pgstat.stat file is over 1 MB again.\n\nIt is unfortunate that the IO utilisation of this seems to be O(n^2) as \nthat is a big impediment to shared hosting. Is there any type of bounty \nsystem where people can contribute to developing features?\n\nRegards, David\n\nOn 29/07/12 00:13, Tom Lane wrote:\n> David Barton <[email protected]> writes:\n>> I am running postgres 9.1.4 on Ubuntu 12.04 and the stats collector is\n>> generating very high IO usage even when nothing appears to be happening\n>> on the system.\n>> I have roughly 150 different databases, each of which is running in 1 of\n>> roughly 30 tablespaces. The databases are small (the dump of most is\n>> are under 100M, and all but 3 are under 1G, nothing larger than 2G).\n> That's a lot of databases. I think your problem probably stems from\n> autovacuum madly trying to cover all of them. Backing off (increasing)\n> autovacuum_naptime to slow its cycle might help.\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Sun, 29 Jul 2012 01:08:10 +0800",
"msg_from": "David Barton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 9.1.4 - high stats collector IO usage"
},
{
"msg_contents": "Thanks, Pavel.\n\nI was a bit worried about trying this because of a comment elsewhere \nthat the file was supposed to be permanent. Tom's solution of \nincreasing the vacuum delay has solved it for now.\n\nRegards, David\n\n\nOn 28/07/12 15:07, Pavel Stehule wrote:\n> Hello\n>\n> I had same problem with large numbers of tables - you can move\n> pg_stat_tmp to tmpfs filesystem - it was solution for us\n>\n> Regards\n>\n> Pavel\n>\n> 2012/7/28 David Barton <[email protected]>:\n>> Hi,\n>>\n>> <snip>\n\n-- \n\n*David Barton - Managing Director*\n1iT Pty Ltd \"The Power of One\"\n\nTel: (08) 9382 2296\nDirect: (08) 9200 4269\nMob: 0404 863 671\nFax: (08) 6210 1354\nWeb: www.1it.com.au\n\nFirst Floor\n41 Oxford Close\nWest Leederville, 6007",
"msg_date": "Sun, 29 Jul 2012 01:11:54 +0800",
"msg_from": "David Barton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 9.1.4 - high stats collector IO usage"
},
{
"msg_contents": "On 29/07/12 05:08, David Barton wrote:\n> Thanks so much, Tom.\n>\n> That did the job. I increased it to every 15 minutes and it has \n> dropped substantially even though the pgstat.stat file is over 1 MB \n> again.\n>\n>\n\nI'd be a little concerned that having autovacuum tuned down to run so \ninfrequently might result in massive table bloat - which could be even \nworse than than the original stats file problem. Keep an eye out for \nany table(s) that used to be small but become unexpectedly larger over \nthe next few days, and if you see any then decrease the naptime again.\n\nRegards\n\nMark\n\n\n\n",
"msg_date": "Tue, 31 Jul 2012 13:21:04 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 9.1.4 - high stats collector IO usage"
},
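One simple way to keep the eye Mark suggests on unexpectedly growing tables is to watch dead-tuple counts and the last autovacuum time in the statistics views, for example:

    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 20;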
{
"msg_contents": "That's not a good way of doing it, since you loose persistent storage.\n\nInstead, you should set the stats_temp_dir paramter to a filesystem\nsomewhere else that is tmpfs. Then PostgreSQL will automatically move\nthe file to and from the main data directory on startup and shutdown,\nso you get both the performance of tmpfs and the persistent\nstatistics.\n\n//Magnus\n\nOn Sat, Jul 28, 2012 at 9:07 AM, Pavel Stehule <[email protected]> wrote:\n> Hello\n>\n> I had same problem with large numbers of tables - you can move\n> pg_stat_tmp to tmpfs filesystem - it was solution for us\n>\n> Regards\n>\n> Pavel\n>\n> 2012/7/28 David Barton <[email protected]>:\n>> Hi,\n>>\n>> I am running postgres 9.1.4 on Ubuntu 12.04 and the stats collector is\n>> generating very high IO usage even when nothing appears to be happening on\n>> the system.\n>>\n>> I have roughly 150 different databases, each of which is running in 1 of\n>> roughly 30 tablespaces. The databases are small (the dump of most is are\n>> under 100M, and all but 3 are under 1G, nothing larger than 2G).\n>>\n>> Previously iotop reported the disk write speed, at ~6MB / second. I went\n>> and reset the stats for every database and that shrunk the stats file and\n>> brought the IO it down to 1MB / second. I still think this is too high for\n>> an idle database. I've now noticed it is growing.\n>>\n>> ls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>> -rw------- 1 postgres postgres 3515080 Jul 28 11:58\n>> /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>>\n>> <reset of stats>\n>>\n>> ls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>> -rw------- 1 postgres postgres 514761 Jul 28 12:11\n>> /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>>\n>> <watch the file grow>\n>>\n>> ls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>> -rw------- 1 postgres postgres 776711 Jul 28 12:25\n>> /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>>\n>> In the 15 minutes since the reset, IO has nearly doubled to 1.6+ MB /\n>> second.\n>>\n>> FWIW, I just migrated all these databases over to this new server by\n>> restoring from pg_dump I was previously experiencing this on 8.3, which was\n>> why I upgraded to 9.1 and I also have another server with similar problems\n>> on 9.1.\n>>\n>> Any help would be sincerely appreciated.\n>>\n>>\n>> David Barton [email protected]\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n",
"msg_date": "Mon, 6 Aug 2012 16:10:20 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 9.1.4 - high stats collector IO usage"
},
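A minimal sketch of the setup Magnus describes, assuming a Linux host; the mount point is illustrative, the parameter is spelled stats_temp_directory in postgresql.conf, and it can be changed with a reload:

    # /etc/fstab -- a small tmpfs for the stats file (chown the mount point to postgres after mounting)
    tmpfs  /var/run/pg_stats_tmp  tmpfs  size=64M,mode=0755  0  0

    # postgresql.conf
    stats_temp_directory = '/var/run/pg_stats_tmp'

As Magnus notes, PostgreSQL moves the file between this directory and the main data directory on startup and shutdown, which is what preserves the statistics across restarts.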
{
"msg_contents": "2012/8/6 Magnus Hagander <[email protected]>:\n> That's not a good way of doing it, since you loose persistent storage.\n>\n> Instead, you should set the stats_temp_dir paramter to a filesystem\n> somewhere else that is tmpfs. Then PostgreSQL will automatically move\n> the file to and from the main data directory on startup and shutdown,\n> so you get both the performance of tmpfs and the persistent\n> statistics.\n\nwe had to do it because our read/write of stat file created really\nhigh IO - and it was problem on Amazon :( - probably we had not this\nissue elsewhere\n\nRegards\n\nPavel\n\n\n\n>\n> //Magnus\n>\n> On Sat, Jul 28, 2012 at 9:07 AM, Pavel Stehule <[email protected]> wrote:\n>> Hello\n>>\n>> I had same problem with large numbers of tables - you can move\n>> pg_stat_tmp to tmpfs filesystem - it was solution for us\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>> 2012/7/28 David Barton <[email protected]>:\n>>> Hi,\n>>>\n>>> I am running postgres 9.1.4 on Ubuntu 12.04 and the stats collector is\n>>> generating very high IO usage even when nothing appears to be happening on\n>>> the system.\n>>>\n>>> I have roughly 150 different databases, each of which is running in 1 of\n>>> roughly 30 tablespaces. The databases are small (the dump of most is are\n>>> under 100M, and all but 3 are under 1G, nothing larger than 2G).\n>>>\n>>> Previously iotop reported the disk write speed, at ~6MB / second. I went\n>>> and reset the stats for every database and that shrunk the stats file and\n>>> brought the IO it down to 1MB / second. I still think this is too high for\n>>> an idle database. I've now noticed it is growing.\n>>>\n>>> ls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>>> -rw------- 1 postgres postgres 3515080 Jul 28 11:58\n>>> /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>>>\n>>> <reset of stats>\n>>>\n>>> ls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>>> -rw------- 1 postgres postgres 514761 Jul 28 12:11\n>>> /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>>>\n>>> <watch the file grow>\n>>>\n>>> ls -l /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>>> -rw------- 1 postgres postgres 776711 Jul 28 12:25\n>>> /var/lib/postgresql/9.1/main/pg_stat_tmp/pgstat.stat\n>>>\n>>> In the 15 minutes since the reset, IO has nearly doubled to 1.6+ MB /\n>>> second.\n>>>\n>>> FWIW, I just migrated all these databases over to this new server by\n>>> restoring from pg_dump I was previously experiencing this on 8.3, which was\n>>> why I upgraded to 9.1 and I also have another server with similar problems\n>>> on 9.1.\n>>>\n>>> Any help would be sincerely appreciated.\n>>>\n>>>\n>>> David Barton [email protected]\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n> --\n> Magnus Hagander\n> Me: http://www.hagander.net/\n> Work: http://www.redpill-linpro.com/\n",
"msg_date": "Mon, 6 Aug 2012 16:16:12 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 9.1.4 - high stats collector IO usage"
},
{
"msg_contents": "On Mon, Aug 6, 2012 at 4:16 PM, Pavel Stehule <[email protected]> wrote:\n> 2012/8/6 Magnus Hagander <[email protected]>:\n>> That's not a good way of doing it, since you loose persistent storage.\n>>\n>> Instead, you should set the stats_temp_dir paramter to a filesystem\n>> somewhere else that is tmpfs. Then PostgreSQL will automatically move\n>> the file to and from the main data directory on startup and shutdown,\n>> so you get both the performance of tmpfs and the persistent\n>> statistics.\n>\n> we had to do it because our read/write of stat file created really\n> high IO - and it was problem on Amazon :( - probably we had not this\n> issue elsewhere\n\nUh. You realize that if you set stats_temp_dir, it only ever writes to\nthe persistent storage once, when you do \"pg_ctl stop\" (or\nequivalent). Are you saying the shutdown took too long?\n\nI've had to change that param many times on Amazon, but I've never had\na problem with the shutdown writes. (And I've seen it often enough on\ndedicated hardware as well..)\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n",
"msg_date": "Mon, 6 Aug 2012 16:18:49 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 9.1.4 - high stats collector IO usage"
},
{
"msg_contents": "2012/8/6 Magnus Hagander <[email protected]>:\n> On Mon, Aug 6, 2012 at 4:16 PM, Pavel Stehule <[email protected]> wrote:\n>> 2012/8/6 Magnus Hagander <[email protected]>:\n>>> That's not a good way of doing it, since you loose persistent storage.\n>>>\n>>> Instead, you should set the stats_temp_dir paramter to a filesystem\n>>> somewhere else that is tmpfs. Then PostgreSQL will automatically move\n>>> the file to and from the main data directory on startup and shutdown,\n>>> so you get both the performance of tmpfs and the persistent\n>>> statistics.\n>>\n>> we had to do it because our read/write of stat file created really\n>> high IO - and it was problem on Amazon :( - probably we had not this\n>> issue elsewhere\n>\n> Uh. You realize that if you set stats_temp_dir, it only ever writes to\n> the persistent storage once, when you do \"pg_ctl stop\" (or\n> equivalent). Are you saying the shutdown took too long?\n>\n> I've had to change that param many times on Amazon, but I've never had\n> a problem with the shutdown writes. (And I've seen it often enough on\n> dedicated hardware as well..)\naha, this is my mistake - we use a stats_temp_directory GUC already -\nmy advice was use a tmpfs for statfile, but I though realize it via\nstats_temp_directory, but I didn't say it.\n\nRegards\n\nPavel\n\n>\n> --\n> Magnus Hagander\n> Me: http://www.hagander.net/\n> Work: http://www.redpill-linpro.com/\n",
"msg_date": "Mon, 6 Aug 2012 16:43:42 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 9.1.4 - high stats collector IO usage"
},
{
"msg_contents": "On Fri, Jul 27, 2012 at 9:33 PM, David Barton <[email protected]> wrote:\n> Hi,\n>\n> I am running postgres 9.1.4 on Ubuntu 12.04 and the stats collector is\n> generating very high IO usage even when nothing appears to be happening on\n> the system.\n>\n> I have roughly 150 different databases, each of which is running in 1 of\n> roughly 30 tablespaces. The databases are small (the dump of most is are\n> under 100M, and all but 3 are under 1G, nothing larger than 2G).\n\nIt isn't the size of the data that matters, but the number of objects.\n It sounds like your databases have about 150 statistics-containing\nobjects each, in order to come up with a 3.5MB stats file.\n\nWhat do you gain by using databases rather than schema to do the segregation?\n\n> Previously iotop reported the disk write speed, at ~6MB / second.\n\nSo that corresponds to about 2 physical write-outs of the stats file\nper second. Are you using ext4? It has the peculiar (to me) property\nthat when a file is renamed out of existence, it writes out all of\nthat file's obsolete dirty buffers, rather than just dropping them as\nuninteresting to anyone. That generates about 10 times the physical\nIO as the ext3 file system does. And of course about infinite times\nthe physical IO as a tmpfs.\n\n> FWIW, I just migrated all these databases over to this new server by\n> restoring from pg_dump I was previously experiencing this on 8.3, which was\n> why I upgraded to 9.1 and I also have another server with similar problems\n> on 9.1.\n>\n> Any help would be sincerely appreciated.\n\nI think the first line of defense would be using /dev/shm to hold the\nstats file. I don't see any downside to that. You are reading and\nwriting that file so ferociously anyway that it is always going to be\ntaking up RAM, no matter where you put it. Indeed, under ext4 you\nmight use even have several copies of it all locked into RAM as they\nwait to reach the disk before being dropped.\n\nIncreasing the naptime, as you have already done, will also decrease\nthe physical IO, but that has the trade-off of risking bloat. (But\nsince you are running 150 databases on one machine, I doubt any of\nthem are active enough for the risk of bloat to be all that great).\nHowever using /dev/shm should eliminate the IO entirely with no\ntrade-off at all.\n\nBut with /dev/shm the CPU usage of repeatedly formatting, writing,\nreading, and parsing the stat file will still be considerable, while\nincreasing the naptime will reduce that as well.\n\nAs far as coding changes to overcome the fundamental problem:\n\nA relatively easy change would be to make any given autovacuum worker\non start up tolerate a stats file that is out of date by up to, say,\nnaptime/5. That would greatly reduce the amount of writing the stats\ncollector needs to do (assuming that few tables actually need\nvacuuming during any given cycle), but wouldn't change the amount of\nreading a worker needs to do because it still needs to read the file\neach time as it doesn't inherit the stats from anyone. I don't think\nit would be a problem that a table which becomes eligible for\nvacuuming in the last 20% of a cycle would have to wait for one more\nround. Especially as this change might motivate one to reduce the\nnaptime since doing so will be cheaper.\n\nBut it seems like maybe the stats collector could use a ground-up\nrecoding. 
Maybe it could use a shared relation to store the stats\nwithin the database cluster itself, so that edits could be done in\nplace per database rather than re-writing the entire cluster's stats?\n But I certainly am not volunteering to take on that task.\n\nA compromise might be to have one stats file per database. That way\nany given backend only needs to read in the database file it cares\nabout, and the stat's collector only needs to write out the one\ndatabase asked of it. This change could be mostly localized to just\npgstat.c, I think.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Sun, 12 Aug 2012 14:23:09 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 9.1.4 - high stats collector IO usage"
},
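Since, as Jeff notes, it is the number of statistics-containing objects rather than the data size that drives the stats file size, a rough per-database count of tracked tables can be had with:

    SELECT count(*) AS tables_tracked FROM pg_stat_all_tables;

Run in a few representative databases and multiplied out across all 150, this gives a feel for how large pgstat.stat has to be.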
{
"msg_contents": "Hi Jeff,\n\nThanks for the detailed reply.\n\nOn 13/08/12 05:23, Jeff Janes wrote:\n> On Fri, Jul 27, 2012 at 9:33 PM, David Barton <[email protected]> wrote:\n>> Hi,\n>>\n>> <snip>\n>> I have roughly 150 different databases, each of which is running in 1 of\n>> roughly 30 tablespaces. The databases are small (the dump of most is are\n>> under 100M, and all but 3 are under 1G, nothing larger than 2G).\n> It isn't the size of the data that matters, but the number of objects.\n> It sounds like your databases have about 150 statistics-containing\n> objects each, in order to come up with a 3.5MB stats file.\n>\n> What do you gain by using databases rather than schema to do the segregation?\nI had never imagined that there was such a profound difference between \nusing schemas and using databases. I imagine that I could convert from \nusing databases to using schemas.\n>\n>> Previously iotop reported the disk write speed, at ~6MB / second.\n> So that corresponds to about 2 physical write-outs of the stats file\n> per second. Are you using ext4? It has the peculiar (to me) property\n> that when a file is renamed out of existence, it writes out all of\n> that file's obsolete dirty buffers, rather than just dropping them as\n> uninteresting to anyone. That generates about 10 times the physical\n> IO as the ext3 file system does. And of course about infinite times\n> the physical IO as a tmpfs.\nIt was previously on ext3 and moved to ext4. That didn't seem to make a \ndifference, I'm guessing that the higher IO on the new server was just \nthat it was capable of doing it.\n>\n>> FWIW, I just migrated all these databases over to this new server by\n>> restoring from pg_dump I was previously experiencing this on 8.3, which was\n>> why I upgraded to 9.1 and I also have another server with similar problems\n>> on 9.1.\n>>\n>> Any help would be sincerely appreciated.\n> I think the first line of defense would be using /dev/shm to hold the\n> stats file. I don't see any downside to that. You are reading and\n> writing that file so ferociously anyway that it is always going to be\n> taking up RAM, no matter where you put it. Indeed, under ext4 you\n> might use even have several copies of it all locked into RAM as they\n> wait to reach the disk before being dropped.\n>\n> Increasing the naptime, as you have already done, will also decrease\n> the physical IO, but that has the trade-off of risking bloat. (But\n> since you are running 150 databases on one machine, I doubt any of\n> them are active enough for the risk of bloat to be all that great).\n> However using /dev/shm should eliminate the IO entirely with no\n> trade-off at all.\n>\n> But with /dev/shm the CPU usage of repeatedly formatting, writing,\n> reading, and parsing the stat file will still be considerable, while\n> increasing the naptime will reduce that as well.\nThe CPU overhead seems pretty minimal, and a slight reduction in naptime \nshould be more than enough.\n>\n> As far as coding changes to overcome the fundamental problem:\n>\n> A relatively easy change would be to make any given autovacuum worker\n> on start up tolerate a stats file that is out of date by up to, say,\n> naptime/5. That would greatly reduce the amount of writing the stats\n> collector needs to do (assuming that few tables actually need\n> vacuuming during any given cycle), but wouldn't change the amount of\n> reading a worker needs to do because it still needs to read the file\n> each time as it doesn't inherit the stats from anyone. 
I don't think\n> it would be a problem that a table which becomes eligible for\n> vacuuming in the last 20% of a cycle would have to wait for one more\n> round. Especially as this change might motivate one to reduce the\n> naptime since doing so will be cheaper.\nIf the stats are mirrored in memory, then that makes sense. Of course, \nif that's the case then couldn't we just alter the stats to flush at \nmaximum once per N seconds / minutes? If the stats are not mirrored in \nmemory, doesn't that imply that most of the databases will never flush \nupdates stats to disk and so the file will become stale?\n>\n> But it seems like maybe the stats collector could use a ground-up\n> recoding. Maybe it could use a shared relation to store the stats\n> within the database cluster itself, so that edits could be done in\n> place per database rather than re-writing the entire cluster's stats?\n> But I certainly am not volunteering to take on that task.\n>\n> A compromise might be to have one stats file per database. That way\n> any given backend only needs to read in the database file it cares\n> about, and the stat's collector only needs to write out the one\n> database asked of it. This change could be mostly localized to just\n> pgstat.c, I think.\nThat approach was what I had thought I am not a C programmer by any \nstretch of the imagination, which is why I asked if there was a place to \nfind this kind of thing. It seems likely there are a few features that \npeople would be willing to put money towards.\n\n>\n> Cheers,\n>\n> Jeff\n>\n>\nRegards,\nDavid\n\n",
"msg_date": "Mon, 13 Aug 2012 10:17:28 +0800",
"msg_from": "David Barton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 9.1.4 - high stats collector IO usage"
},
{
"msg_contents": "On Sun, Aug 12, 2012 at 7:17 PM, David Barton <[email protected]> wrote:\n>>\n>> A relatively easy change would be to make any given autovacuum worker\n>> on start up tolerate a stats file that is out of date by up to, say,\n>> naptime/5. That would greatly reduce the amount of writing the stats\n>> collector needs to do (assuming that few tables actually need\n>> vacuuming during any given cycle), but wouldn't change the amount of\n>> reading a worker needs to do because it still needs to read the file\n>> each time as it doesn't inherit the stats from anyone. I don't think\n>> it would be a problem that a table which becomes eligible for\n>> vacuuming in the last 20% of a cycle would have to wait for one more\n>> round. Especially as this change might motivate one to reduce the\n>> naptime since doing so will be cheaper.\n>\n> If the stats are mirrored in memory, then that makes sense. Of course, if\n> that's the case then couldn't we just alter the stats to flush at maximum\n> once per N seconds / minutes?\n\nThe actual flushing is not under our control, but under the kernel's\ncontrol. But in essence that is what I am suggesting. If the vacuum\nworkers ask the stats collector for fresh stats less often, the stats\ncollector will write them to the kernel less often. If we are writing\nthem to the kernel less often, the kernel will flush them less often.\nThe kernel could choose to flush them less often anyway, but for some\nreason with ext4 it doesn't.\n\n> If the stats are not mirrored in memory,\n> doesn't that imply that most of the databases will never flush updates stats\n> to disk and so the file will become stale?\n\nWe don't do anything specific to cause them to be mirrored, it is just\nthat that is the way the kernel deals with frequently accessed\nfile-system data. If the kernel decided to evict the data from\nmemory, it would have to make sure it reached disk first. It is the\nkernel's job to present a consistent image of all the file-system\ndata, regardless of whether it is actually in memory, or on disk, or\nboth. If the data on disk is stale, the kernel guarantees requests to\nread it will be served from memory rather than from disk.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Mon, 13 Aug 2012 12:40:28 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 9.1.4 - high stats collector IO usage"
}
] |
[
{
"msg_contents": "Many of you will be aware that the behaviour of commit_delay was\nrecently changed. Now, the delay only occurs within the group commit\nleader backend, and not within each and every backend committing a\ntransaction:\n\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f11e8be3e812cdbbc139c1b4e49141378b118dee\n\nFor those of you that didn't follow this development, I should point\nout that I wrote a blogpost that described the idea, which will serve\nas a useful summary:\n\nhttp://pgeoghegan.blogspot.com/2012/06/towards-14000-write-transactions-on-my.html\n\nI made what may turn out to be a useful observation during the\ndevelopment of the patch, which was that for both the tpc-b.sql and\ninsert.sql pgbench-tools scripts, a commit_delay of half of my\nwal_sync_method's reported raw sync speed looked optimal. I use Linux,\nso my wal_sync_method happened to have been fdatasync. I measured this\nusing pg_test_fsync.\n\nThe devel docs still say of commit_delay and commit siblings: \"Good\nvalues for these parameters are not yet clear; experimentation is\nencouraged\". This has been the case since Postgres 7.1 (i.e. it has\nnever been clear what good values were - the folk wisdom was actually\nthat commit_delay should always be set to 0). I hope to be able to\nformulate some folk wisdom about setting commit_delay from 9.3 on,\nthat may go on to be accepted as an official recommendation within the\ndocs.\n\nI am rather curious as to what experimentation shows optimal values\nfor commit_delay to be for a representative cross-section of hardware.\nIn particular, I'd like to see if setting commit_delay to half of raw\nsync time appears to be optimal for both insert.sql and tpc-b.sql\nworkloads across different types of hardware with different sync\ntimes. Now, it may be sort of questionable to take those workloads as\ngeneral proxies for performance, not least since they will literally\ngive Postgres as many *completely uniform* transactions as it can\nhandle. However, it is hard to think of another, better general proxy\nfor performance that is likely to be accepted as such, and will allows\nus to reason about setting commit_delay.\n\nWhile I am not completely confident that we can formulate a widely\nuseful, simple piece of advice, I am encouraged by the fact that a\ncommit_delay of 4,000 worked very well for both tpc-b.sql and\ninsert.sql workloads on my laptop, beating out settings of 3,000 and\n5,000 on each benchmark. I am also encouraged by the fact that in some\ncases, including both the insert.sql and tpc-b.sql cases that I've\nalready described elsewhere, there is actually no downside to setting\ncommit_delay - transaction throughput naturally improves, but\ntransaction latency is actually improved a bit too (or at least the\naverage and worst-cases). This is presumably due to the amelioration\nof resource contention (from greater commit batching) more than\ncompensating for the obvious downside of adding a delay.\n\nIt would be useful, for a start, if I had numbers for a battery-backed\nwrite cache. 
I don't have access to one right now though, nor do I\nhave access to any more interesting hardware, which is one reason why\nI'm asking for help with this.\n\nI like to run \"sync\" prior to running pg_test_fsync, just in case.\n\n[peter@peterlaptop pg_test_fsync]$ sync\n\nI then interpret the following output:\n\n[peter@peterlaptop pg_test_fsync]$ pg_test_fsync\n2 seconds per test\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync\nis Linux's default)\n open_datasync 112.940 ops/sec\n fdatasync 114.035 ops/sec\n fsync 21.291 ops/sec\n*** SNIP ***\n\nSo if I can perform 114.035 8KiB sync operations per second, that's an\naverage of about 1 per 8.77 milliseconds, or 8770 microseconds to put\nit in the units that commit_delay speaks. It is my hope that we will\nfind that when this number is halved, we will arrive at a figure that\nis worth recommending as a general useful setting for commit_delay for\nthe system. I guess I could gain some additional insight by simply\nchanging my wal_sync_method, but I'd find it more interesting to look\nat organic setups with faster (not slower) sync times than my system's\nfdatasync. For those who are able to help me here, I'd like to see\npgbench-tools workloads for both tpc-b.sql and insert.sql with\nincrementing values of commit_delay (increments of, say, 1000\nmicroseconds, perhaps with less granularity where it isn't needed),\nfrom 0 to $(1.5 times raw sync speed) microseconds.\n\nThanks\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n",
"msg_date": "Sun, 29 Jul 2012 16:39:21 +0100",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help me develop new commit_delay advice"
},
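To turn pg_test_fsync output into the starting value Peter proposes, take one second divided by the reported ops/sec for your wal_sync_method and halve it; with the fdatasync figure quoted above:

    -- 114.035 syncs/sec -> ~8770 microseconds per sync; half of that is the candidate commit_delay
    SELECT round(1000000 / 114.035 / 2) AS commit_delay_us;   -- ~4385, close to the 4000 that tested well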
{
"msg_contents": "> From: [email protected]\n[mailto:[email protected]] On Behalf Of Peter Geoghegan\n> Sent: Sunday, July 29, 2012 9:09 PM\n\n\n> I made what may turn out to be a useful observation during the\n> development of the patch, which was that for both the tpc-b.sql and\n> insert.sql pgbench-tools scripts, a commit_delay of half of my\n> wal_sync_method's reported raw sync speed looked optimal. I use Linux,\n> so my wal_sync_method happened to have been fdatasync. I measured this\n> using pg_test_fsync.\n\nI have done some basic test for commit_delay parameter\nOS version: suse linux 10.3 \npostgresql version: 9.3 dev on x86-64, compiled by gcc (GCC) 4.1.2 20070115 \nMachine details: 8 core cpu, 24GB RAM. \nTestcase: pgbench tcp_b test. \n\nBefore running the benchmark suite, the buffers are loaded by using\npg_prewarm utility. \n\nTest Results are attached with this mail.\nRun1,Run2,Run3 means the same test has ran 3 times.\n\n\n> It would be useful, for a start, if I had numbers for a battery-backed\n> write cache. I don't have access to one right now though, nor do I\n> have access to any more interesting hardware, which is one reason why\n> I'm asking for help with this.\n\n> I like to run \"sync\" prior to running pg_test_fsync, just in case.\n\n> [peter@peterlaptop pg_test_fsync]$ sync\n\n>I then interpret the following output:\n\n> [peter@peterlaptop pg_test_fsync]$ pg_test_fsync\n> 2 seconds per test\n> O_DIRECT supported on this platform for open_datasync and open_sync.\n\n> Compare file sync methods using one 8kB write:\n> (in wal_sync_method preference order, except fdatasync\n> is Linux's default)\n> open_datasync 112.940 ops/sec\n> fdatasync 114.035 ops/sec\n> fsync 21.291 ops/sec\n> *** SNIP ***\n\nI shall look into this aspect also(setting commit_delay based on raw sync).\nYou also suggest if you want to run the test with different configuration.\n\nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 1 Aug 2012 19:44:45 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help me develop new commit_delay advice"
},
{
"msg_contents": "On 1 August 2012 15:14, Amit Kapila <[email protected]> wrote:\n> I shall look into this aspect also(setting commit_delay based on raw sync).\n> You also suggest if you want to run the test with different configuration.\n\nWell, I was specifically interested in testing if half of raw sync\ntime was a widely useful setting, across a variety of different,\nthough representative I/O subsystems. Unfortunately, without some\ncontext about raw sync speed to go along with your numbers, I cannot\nadvance or disprove that idea.\n\nIt would also have been nice to see a baseline number of 0 too, to get\nan idea of how effective commit_delay may now be. However, that's\nsecondary.\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n",
"msg_date": "Wed, 1 Aug 2012 16:19:28 +0100",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help me develop new commit_delay advice"
},
{
"msg_contents": "> From: Peter Geoghegan [mailto:[email protected]] \n> Sent: Wednesday, August 01, 2012 8:49 PM\n\nOn 1 August 2012 15:14, Amit Kapila <[email protected]> wrote:\n>> I shall look into this aspect also(setting commit_delay based on raw\nsync).\n>> You also suggest if you want to run the test with different\nconfiguration.\n\n> Well, I was specifically interested in testing if half of raw sync\n> time was a widely useful setting, across a variety of different,\n> though representative I/O subsystems. Unfortunately, without some\n> context about raw sync speed to go along with your numbers, I cannot\n> advance or disprove that idea.\n\nRaw sync speed data\n--------------------------\n2 seconds per test \nO_DIRECT supported on this platform for open_datasync and open_sync. \n\nCompare file sync methods using one 8kB write: \n(in wal_sync_method preference order, except fdatasync \nis Linux's default) \n open_datasync n/a \n fdatasync 165.506 ops/sec \n fsync 166.647 ops/sec \n fsync_writethrough n/a \n open_sync 164.654 ops/sec \n\n165.506 * 8KB operations can perform in one sec. \nso 1 * 8KB operation takes 6.042 msec.\n\n> It would also have been nice to see a baseline number of 0 too, to get\n> an idea of how effective commit_delay may now be. However, that's\n> secondary.\n\nIn the data sent yesterday commit_delay=0 was there.\n\n\nWith Regards,\nAmit Kapila.\n\n\n\n",
"msg_date": "Thu, 2 Aug 2012 16:45:15 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help me develop new commit_delay advice"
},
{
"msg_contents": "Peter,\n\nFor some reason I didn't receive the beginning of this thread. Can you\nresend it to me, or (better) post it to the pgsql-performance mailing list?\n\nI have a linux system where I can test both on regular disk and on SSD.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Thu, 02 Aug 2012 10:48:29 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help me develop new commit_delay advice"
},
{
"msg_contents": "This has been reposted to this list from the pgsql-hackers list, at\nthe request of Josh Berkus. Hopefully there will be more interest\nhere.\n\n---------- Forwarded message ----------\nFrom: Peter Geoghegan <[email protected]>\nDate: 29 July 2012 16:39\nSubject: Help me develop new commit_delay advice\nTo: PG Hackers <[email protected]>\n\n\nMany of you will be aware that the behaviour of commit_delay was\nrecently changed. Now, the delay only occurs within the group commit\nleader backend, and not within each and every backend committing a\ntransaction:\n\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f11e8be3e812cdbbc139c1b4e49141378b118dee\n\nFor those of you that didn't follow this development, I should point\nout that I wrote a blogpost that described the idea, which will serve\nas a useful summary:\n\nhttp://pgeoghegan.blogspot.com/2012/06/towards-14000-write-transactions-on-my.html\n\nI made what may turn out to be a useful observation during the\ndevelopment of the patch, which was that for both the tpc-b.sql and\ninsert.sql pgbench-tools scripts, a commit_delay of half of my\nwal_sync_method's reported raw sync speed looked optimal. I use Linux,\nso my wal_sync_method happened to have been fdatasync. I measured this\nusing pg_test_fsync.\n\nThe devel docs still say of commit_delay and commit siblings: \"Good\nvalues for these parameters are not yet clear; experimentation is\nencouraged\". This has been the case since Postgres 7.1 (i.e. it has\nnever been clear what good values were - the folk wisdom was actually\nthat commit_delay should always be set to 0). I hope to be able to\nformulate some folk wisdom about setting commit_delay from 9.3 on,\nthat may go on to be accepted as an official recommendation within the\ndocs.\n\nI am rather curious as to what experimentation shows optimal values\nfor commit_delay to be for a representative cross-section of hardware.\nIn particular, I'd like to see if setting commit_delay to half of raw\nsync time appears to be optimal for both insert.sql and tpc-b.sql\nworkloads across different types of hardware with different sync\ntimes. Now, it may be sort of questionable to take those workloads as\ngeneral proxies for performance, not least since they will literally\ngive Postgres as many *completely uniform* transactions as it can\nhandle. However, it is hard to think of another, better general proxy\nfor performance that is likely to be accepted as such, and will allows\nus to reason about setting commit_delay.\n\nWhile I am not completely confident that we can formulate a widely\nuseful, simple piece of advice, I am encouraged by the fact that a\ncommit_delay of 4,000 worked very well for both tpc-b.sql and\ninsert.sql workloads on my laptop, beating out settings of 3,000 and\n5,000 on each benchmark. I am also encouraged by the fact that in some\ncases, including both the insert.sql and tpc-b.sql cases that I've\nalready described elsewhere, there is actually no downside to setting\ncommit_delay - transaction throughput naturally improves, but\ntransaction latency is actually improved a bit too (or at least the\naverage and worst-cases). This is presumably due to the amelioration\nof resource contention (from greater commit batching) more than\ncompensating for the obvious downside of adding a delay.\n\nIt would be useful, for a start, if I had numbers for a battery-backed\nwrite cache. 
I don't have access to one right now though, nor do I\nhave access to any more interesting hardware, which is one reason why\nI'm asking for help with this.\n\nI like to run \"sync\" prior to running pg_test_fsync, just in case.\n\n[peter@peterlaptop pg_test_fsync]$ sync\n\nI then interpret the following output:\n\n[peter@peterlaptop pg_test_fsync]$ pg_test_fsync\n2 seconds per test\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync\nis Linux's default)\n open_datasync 112.940 ops/sec\n fdatasync 114.035 ops/sec\n fsync 21.291 ops/sec\n*** SNIP ***\n\nSo if I can perform 114.035 8KiB sync operations per second, that's an\naverage of about 1 per 8.77 milliseconds, or 8770 microseconds to put\nit in the units that commit_delay speaks. It is my hope that we will\nfind that when this number is halved, we will arrive at a figure that\nis worth recommending as a general useful setting for commit_delay for\nthe system. I guess I could gain some additional insight by simply\nchanging my wal_sync_method, but I'd find it more interesting to look\nat organic setups with faster (not slower) sync times than my system's\nfdatasync. For those who are able to help me here, I'd like to see\npgbench-tools workloads for both tpc-b.sql and insert.sql with\nincrementing values of commit_delay (increments of, say, 1000\nmicroseconds, perhaps with less granularity where it isn't needed),\nfrom 0 to $(1.5 times raw sync speed) microseconds.\n\nThanks\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n",
"msg_date": "Thu, 2 Aug 2012 19:02:33 +0100",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "[repost] Help me develop new commit_delay advice"
},
{
"msg_contents": "On 29 July 2012 16:39, Peter Geoghegan <[email protected]> wrote:\n> Many of you will be aware that the behaviour of commit_delay was\n> recently changed. Now, the delay only occurs within the group commit\n> leader backend, and not within each and every backend committing a\n> transaction:\n\nI've moved this to the pgsql-performance list. Please continue the\ndiscussion there.\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n",
"msg_date": "Thu, 2 Aug 2012 19:04:51 +0100",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help me develop new commit_delay advice"
},
{
"msg_contents": "On 08/02/2012 02:02 PM, Peter Geoghegan wrote:\n> I made what may turn out to be a useful observation during the\n> development of the patch, which was that for both the tpc-b.sql and\n> insert.sql pgbench-tools scripts, a commit_delay of half of my\n> wal_sync_method's reported raw sync speed looked optimal.\n\nI dug up what I wrote when trying to provide better advice for this \ncirca V8.3. That never really gelled into something worth publishing at \nthe time. But I see some similar patterns what what you're reporting, \nso maybe this will be useful input to you now. That included a 7200RPM \ndrive and a system with a BBWC.\n\nIn the BBWC case, the only useful tuning I found was to add a very small \namount of commit_delay, possibly increasing the siblings too. I was \nusing http://benjiyork.com/blog/2007/04/sleep-considered-harmful.html to \nfigure out the minimum sleep resolution on the server (3us at the time) \nand setting commit_delay to that; then increasing commit_siblings to 10 \nor 20. Jignesh Shah came back with something in the same sort of range \nthen at \nhttp://jkshah.blogspot.com/2007/07/specjappserver2004-and-postgresql_09.html \n, setting commit_delay=10.\n\nOn the 7200RPM drive ~= 115 TPS, 1/2 of the drive's rotation was \nconsistently what worked best for me across multiple tests too. I also \nfound lowering commit_siblings all the way to 1 could significantly \nimprove the 2 client case when you did that. Here's my notes from then:\n\ncommit_delay=4500, commit_siblings=1: By waiting 1/2 a revolution if \nthere's another active transaction, I get a small improvement at the \nlow-end (around an extra 20 TPS between 2 and 6 clients), while not \ndoing much damage to the higher client loads. This might\nbe a useful tuning if your expected number of active clients are low, \nyou don't have a good caching controller, but you'd like a little more \noomph out of things. The results for 7000 usec were almost as good. \nBut in general, if you're stuck choosing between two commit_delay values \nyou should use the smaller one as it will be less likely to have a bad \nimpact on low client loads.\n\nI also found considering a high delay only when a lot of clients were \nusually involved worked a bit better than a 1/2 rotation:\n\ncommit_delay=10000, commit_siblings=20: At higher client loads, there's \nalmost invariably another commit coming right behind yours if you wait a \nbit. Just plan to wait a bit more than an entire rotation between \ncommits. This buys me about an extra 30 TPS on the high client loads, \nwhich is a small fraction of an improvement (<5%) but might be worthwhile.\n\nThe fact that it seemed the optimal delay needed to vary a bit based on \nthe number of the siblings was one of the challenges I kept butting into \nthen. Making the GUC settings even more complicated for this doesn't \nseem a productive step forward for the average user.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n",
"msg_date": "Wed, 05 Sep 2012 23:20:29 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [repost] Help me develop new commit_delay advice"
},
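For reference, the two spinning-disk configurations Greg describes translate into postgresql.conf roughly as follows; the values are his, reproduced only as data points from an 8.3-era test rather than recommendations:

    # ~7200RPM drive, low client counts: wait about half a revolution
    commit_delay = 4500          # microseconds
    commit_siblings = 1

    # higher client loads: wait a bit more than a full revolution, but only when many backends are active
    commit_delay = 10000
    commit_siblings = 20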
{
"msg_contents": "On 6 September 2012 04:20, Greg Smith <[email protected]> wrote:\n> On 08/02/2012 02:02 PM, Peter Geoghegan wrote:\n> I dug up what I wrote when trying to provide better advice for this circa\n> V8.3. That never really gelled into something worth publishing at the time.\n> But I see some similar patterns what what you're reporting, so maybe this\n> will be useful input to you now. That included a 7200RPM drive and a system\n> with a BBWC.\n\nSo, did either Josh or Greg ever get as far as producing numbers for\ndrives with faster fsyncs than the ~8,000 us fsync speed of my\nlaptop's disk?\n\nI'd really like to be able to make a firm recommendation as to how\ncommit_delay should be set, and have been having a hard time beating\nthe half raw-sync time recommendation, even with a relatively narrow\nbenchmark (that is, the alernative pgbench-tools scripts). My\nobservation is that it is generally better to ameliorate the risk of\nincreased latency through a higher commit_siblings setting rather than\nthrough a lower commit_delay (though it would be easy to overdo it -\ncommit_delay can now be thought of as a way to bring the benefits of\ngroup commit to workloads that could in principle benefit, but would\notherwise not benefit much from it, such as workloads with lots of\nsmall writes but not too many clients).\n\nOne idea I had, which is really more -hackers material, was to test if\nbackends with a transaction are inCommit (that's a PGXACT field),\nrather than just having a transaction, within MinimumActiveBackends().\nThe idea is that commit_siblings would represent the number of\nbackends imminently committing needed to delay, rather than the number\nof backends in a transaction. It is far from clear that that's a good\nidea, but that's perhaps just because the pgbench infrastructure is a\npoor proxy for real workloads, with variable sized transactions.\nPretty much all pgbench transactions commit imminently anyway.\n\nAnother idea which I have lost faith in - because it has been hard to\nprove that client count is really relevant - was the notion that\ncommit_delay should be a dynamically adapting function of the client\n(with transactions) count. Setting commit_delay to 1/2 raw sync time\nappears optimal at any client count that is > 1. The effect at 2\nclients is quite noticeable.\n\nI have a rather busy schedule right now, and cannot spend too many\nmore cycles on this. I'd like to reach a consensus on this soon. Just\ngiving the 1/2 raw sync time the official blessing of being included\nin the docs should be the least we do, though. It is okay if the\nwording is a bit equivocal - that has to be better than the current\nadvice, which is (to paraphrase) \"we don't really have a clue; you\ntell us\".\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n\n",
"msg_date": "Mon, 8 Oct 2012 13:38:47 +0100",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [repost] Help me develop new commit_delay advice"
}
] |
[
{
"msg_contents": "Using PG 9.0 and given 2 queries (messageq_current is a view on the messageq_table):\n\nselect entity_id from messageq_current\nwhere entity_id = 123456;\n\nselect entity_id from messageq_current\nwhere incoming = true\nand inactive = false\nand staff_ty = 2\nand staff_id = 2\norder by entity_id desc\nlimit 1;\n\nand 2 indexes (there are 15 indexes in total but they are left out here for brevity):\n\nmessageq1:\nCREATE INDEX messageq1\n ON messageq_table\n USING btree\n (entity_id);\n\nAnd messageq4:\n\nCREATE INDEX messageq4\n ON messageq_table\n USING btree\n (inactive, staff_ty, staff_id, incoming, tran_dt);\n\nWith the messageq1 index present, query 1 is very quick (0.094ms) and query 2 is very slow (241.515ms).\nIf I remove messageq1 then query 2 uses messageq4 and is very quick (0.098ms) but then query 1 must use a different index and is therefore slower (> 5ms).\n\nSo, to the Query plans:\nWith messageq1:\n\"Limit (cost=0.00..2670.50 rows=1 width=4) (actual time=241.481..241.481 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Buffers: shared hit=32 read=18870 written=12\"\n\" -> Index Scan Backward using messageq1 on prac_live_10112.messageq_table (cost=0.00..66762.53 rows=25 width=4) (actual time=241.479..241.479 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Filter: (messageq_table.incoming AND (NOT messageq_table.inactive) AND (messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2) AND (aud_status_to_flag(messageq_table.aud_status) = 1))\"\n\" Buffers: shared hit=32 read=18870 written=12\"\n\"Total runtime: 241.515 ms\"\n\nWithout messageq1:\n\"Limit (cost=12534.45..12534.45 rows=1 width=4) (actual time=0.055..0.055 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Buffers: shared read=3\"\n\" -> Sort (cost=12534.45..12534.51 rows=25 width=4) (actual time=0.054..0.054 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Sort Key: messageq_table.entity_id\"\n\" Sort Method: quicksort Memory: 17kB\"\n\" -> Bitmap Heap Scan on prac_live_10112.messageq_table (cost=174.09..12534.32 rows=25 width=4) (actual time=0.043..0.043 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Recheck Cond: ((messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2))\"\n\" Filter: (messageq_table.incoming AND (NOT messageq_table.inactive) AND (aud_status_to_flag(messageq_table.aud_status) = 1))\"\n\" Buffers: shared read=3\"\n\" -> Bitmap Index Scan on messageq4 (cost=0.00..174.08 rows=4920 width=0) (actual time=0.040..0.040 rows=0 loops=1)\"\n\" Index Cond: ((messageq_table.inactive = false) AND (messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2) AND (messageq_table.incoming = true))\"\n\" Buffers: shared read=3\"\n\"Total runtime: 0.098 ms\"\n\nClearly the statistics are off somehow but I really don't know where to start.\n\nAny help you can give me would be very much appreciated.\n\nRegards,\n\n\nRussell Keane\n\nINPS\n\n\nTel: +44 (0)20 7501 7277\n\nSubscribe to the Vision e-newsletter<http://www.inps4.co.uk/news/enewsletter/>\nSubscribe to the Helpline Support Bulletin<http://www.inps4.co.uk/my_vision/helpline/support-bulletins>\n[cid:[email protected]] Subscribe to the Helpline Blog RSS Feed<http://www.inps4.co.uk/rss/helplineblog.rss>\n\n\n________________________________\nRegistered name: In Practice Systems Ltd.\nRegistered address: The Bread Factory, 1a Broughton Street, London, SW8 3QJ\nRegistered Number: 1788577\nRegistered in England\nVisit our Internet Web site at www.inps.co.uk\nThe 
information in this internet email is confidential and is intended solely for the addressee. Access, copying or re-use of information in it by anyone else is not authorised. Any views or opinions presented are solely those of the author and do not necessarily represent those of INPS or any of its affiliates. If you are not the intended recipient please contact [email protected]",
"msg_date": "Thu, 2 Aug 2012 15:54:50 +0100",
"msg_from": "Russell Keane <[email protected]>",
"msg_from_op": true,
"msg_subject": "query using incorrect index"
},
{
"msg_contents": "Russell Keane <[email protected]> wrote:\n \n> Clearly the statistics are off somehow but I really don't know\n> where to start.\n> \n> Any help you can give me would be very much appreciated.\n \nIt would help to know your more about your hardware and PostgreSQL\nconfiguration. The latter can probably best be communicated by\ncopy/paste of the results of the query on this page:\n \nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \nCan you also post the EXPLAIN ANALYZE output for the slow query with\nboth indexes present but without the LIMIT clause?\n \n-Kevin\n",
"msg_date": "Thu, 02 Aug 2012 15:13:01 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query using incorrect index"
},
{
"msg_contents": "On Thu, Aug 2, 2012 at 4:54 PM, Russell Keane <[email protected]>wrote:\n\n> ** **\n>\n> Using PG 9.0 and given 2 queries (messageq_current is a view on the\n> messageq_table):****\n>\n> ** **\n>\n> select entity_id from messageq_current****\n>\n> where entity_id = 123456;****\n>\n> ** **\n>\n> select entity_id from messageq_current****\n>\n> where incoming = true****\n>\n> and inactive = false****\n>\n> and staff_ty = 2****\n>\n> and staff_id = 2****\n>\n> order by entity_id desc****\n>\n> limit 1;****\n>\n> ** **\n>\n> and 2 indexes (there are 15 indexes in total but they are left out here\n> for brevity):****\n>\n> ** **\n>\n> messageq1:****\n>\n> CREATE INDEX messageq1****\n>\n> ON messageq_table****\n>\n> USING btree****\n>\n> (entity_id);****\n>\n> ** **\n>\n> And messageq4:****\n>\n> ** **\n>\n> CREATE INDEX messageq4****\n>\n> ON messageq_table****\n>\n> USING btree****\n>\n> (inactive, staff_ty, staff_id, incoming, tran_dt);****\n>\n> **\n>\n\nOf course *a lot* of detail is missing (full schema of table, all the other\nindexes) but with \"inactive\" a boolean column I suspect selectivity might\nnot be too good here and so having it as a first column in a covering index\nis at least questionable. If query 2 is frequent you might also want to\nconsider creating a partial index only on (staff_ty, staff_id) with\nfiltering criteria on incoming and active as present in query 2.\n\nBtw, why don't you formulate query 2 as max query?\n\nselect max(entity_id) as entity_id\n\nfrom messageq_current\n\nwhere incoming = true\n\nand inactive = false\n\nand staff_ty = 2\n\nand staff_id = 2;\n\n\n> **\n>\n> With the messageq1 index present, query 1 is very quick (0.094ms) and\n> query 2 is very slow (241.515ms).****\n>\n> If I remove messageq1 then query 2 uses messageq4 and is very quick\n> (0.098ms) but then query 1 must use a different index and is therefore\n> slower (> 5ms).****\n>\n> ** **\n>\n> So, to the Query plans:****\n>\n\nOf which query? Shouldn't there be four plans in total? 
I'd post plans\nhere:\nhttp://explain.depesz.com/\n\n\n> With messageq1:****\n>\n> \"Limit (cost=0.00..2670.50 rows=1 width=4) (actual time=241.481..241.481\n> rows=0 loops=1)\"****\n>\n> \" Output: messageq_table.entity_id\"****\n>\n> \" Buffers: shared hit=32 read=18870 written=12\"****\n>\n> \" -> Index Scan Backward using messageq1 on\n> prac_live_10112.messageq_table (cost=0.00..66762.53 rows=25 width=4)\n> (actual time=241.479..241.479 rows=0 loops=1)\"****\n>\n> \" Output: messageq_table.entity_id\"****\n>\n> \" Filter: (messageq_table.incoming AND (NOT\n> messageq_table.inactive) AND (messageq_table.staff_ty = 2) AND\n> (messageq_table.staff_id = 2) AND\n> (aud_status_to_flag(messageq_table.aud_status) = 1))\"****\n>\n> \" Buffers: shared hit=32 read=18870 written=12\"****\n>\n> \"Total runtime: 241.515 ms\"****\n>\n> ** **\n>\n> Without messageq1:****\n>\n> \"Limit (cost=12534.45..12534.45 rows=1 width=4) (actual time=0.055..0.055\n> rows=0 loops=1)\"****\n>\n> \" Output: messageq_table.entity_id\"****\n>\n> \" Buffers: shared read=3\"****\n>\n> \" -> Sort (cost=12534.45..12534.51 rows=25 width=4) (actual\n> time=0.054..0.054 rows=0 loops=1)\"****\n>\n> \" Output: messageq_table.entity_id\"****\n>\n> \" Sort Key: messageq_table.entity_id\"****\n>\n> \" Sort Method: quicksort Memory: 17kB\"****\n>\n> \" -> Bitmap Heap Scan on prac_live_10112.messageq_table\n> (cost=174.09..12534.32 rows=25 width=4) (actual time=0.043..0.043 rows=0\n> loops=1)\"****\n>\n> \" Output: messageq_table.entity_id\"****\n>\n> \" Recheck Cond: ((messageq_table.staff_ty = 2) AND\n> (messageq_table.staff_id = 2))\"****\n>\n> \" Filter: (messageq_table.incoming AND (NOT\n> messageq_table.inactive) AND (aud_status_to_flag(messageq_table.aud_status)\n> = 1))\"****\n>\n> \" Buffers: shared read=3\"****\n>\n> \" -> Bitmap Index Scan on messageq4 (cost=0.00..174.08\n> rows=4920 width=0) (actual time=0.040..0.040 rows=0 loops=1)\"****\n>\n> \" Index Cond: ((messageq_table.inactive = false) AND\n> (messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2) AND\n> (messageq_table.incoming = true))\"****\n>\n> \" Buffers: shared read=3\"****\n>\n> \"Total runtime: 0.098 ms\"****\n>\n> ** **\n>\n> Clearly the statistics are off somehow but I really don’t know where to\n> start.****\n>\n> ** **\n>\n> Any help you can give me would be very much appreciated.****\n>\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n\nOn Thu, Aug 2, 2012 at 4:54 PM, Russell Keane <[email protected]> wrote:\n\n\n \nUsing PG 9.0 and given 2 queries (messageq_current is a view on the messageq_table):\n \nselect entity_id from messageq_current\nwhere entity_id = 123456;\n \nselect entity_id from messageq_current\nwhere incoming = true\nand inactive = false\nand staff_ty = 2\nand staff_id = 2\norder by entity_id desc\nlimit 1;\n \nand 2 indexes (there are 15 indexes in total but they are left out here for brevity):\n \nmessageq1:\nCREATE INDEX messageq1\n ON messageq_table\n USING btree\n (entity_id);\n \nAnd messageq4:\n \nCREATE INDEX messageq4\n ON messageq_table\n USING btree\n (inactive, staff_ty, staff_id, incoming, tran_dt);\nOf course a lot of detail is missing (full schema of table, all the other indexes) but with \"inactive\" a boolean column I suspect selectivity might not be too good here and so having it as a first column in a covering index is at least questionable. 
If query 2 is frequent you might also want to consider creating a partial index only on (staff_ty, staff_id) with filtering criteria on incoming and active as present in query 2.\nBtw, why don't you formulate query 2 as max query?select max(entity_id) as entity_idfrom messageq_current\nwhere incoming = true\nand inactive = false\nand staff_ty = 2\nand staff_id = 2; \n \nWith the messageq1 index present, query 1 is very quick (0.094ms) and query 2 is very slow (241.515ms).\nIf I remove messageq1 then query 2 uses messageq4 and is very quick (0.098ms) but then query 1 must use a different index and is therefore slower (> 5ms).\n \nSo, to the Query plans:Of which query? Shouldn't there be four plans in total? I'd post plans here:http://explain.depesz.com/\n \nWith messageq1:\n\"Limit (cost=0.00..2670.50 rows=1 width=4) (actual time=241.481..241.481 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Buffers: shared hit=32 read=18870 written=12\"\n\" -> Index Scan Backward using messageq1 on prac_live_10112.messageq_table (cost=0.00..66762.53 rows=25 width=4) (actual time=241.479..241.479 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Filter: (messageq_table.incoming AND (NOT messageq_table.inactive) AND (messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2) AND (aud_status_to_flag(messageq_table.aud_status) = 1))\"\n\" Buffers: shared hit=32 read=18870 written=12\"\n\"Total runtime: 241.515 ms\"\n \nWithout messageq1:\n\"Limit (cost=12534.45..12534.45 rows=1 width=4) (actual time=0.055..0.055 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Buffers: shared read=3\"\n\" -> Sort (cost=12534.45..12534.51 rows=25 width=4) (actual time=0.054..0.054 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Sort Key: messageq_table.entity_id\"\n\" Sort Method: quicksort Memory: 17kB\"\n\" -> Bitmap Heap Scan on prac_live_10112.messageq_table (cost=174.09..12534.32 rows=25 width=4) (actual time=0.043..0.043 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Recheck Cond: ((messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2))\"\n\" Filter: (messageq_table.incoming AND (NOT messageq_table.inactive) AND (aud_status_to_flag(messageq_table.aud_status) = 1))\"\n\" Buffers: shared read=3\"\n\" -> Bitmap Index Scan on messageq4 (cost=0.00..174.08 rows=4920 width=0) (actual time=0.040..0.040 rows=0 loops=1)\"\n\" Index Cond: ((messageq_table.inactive = false) AND (messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2) AND (messageq_table.incoming = true))\"\n\" Buffers: shared read=3\"\n\"Total runtime: 0.098 ms\"\n \nClearly the statistics are off somehow but I really don’t know where to start.\n \nAny help you can give me would be very much appreciated.\nKind regardsrobert-- remember.guy do |as, often| as.you_can - without endhttp://blog.rubybestpractices.com/",
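A concrete sketch of the partial index and max() rewrite suggested above, using the column names from the thread (the index name is invented, and whether it helps depends on how selective staff_ty/staff_id are among the active, incoming rows):

    CREATE INDEX messageq_staff_active
        ON messageq_table (staff_ty, staff_id)
        WHERE incoming = true AND inactive = false;

    SELECT max(entity_id) AS entity_id
    FROM messageq_current
    WHERE incoming = true
      AND inactive = false
      AND staff_ty = 2
      AND staff_id = 2;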
"msg_date": "Fri, 3 Aug 2012 11:18:20 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query using incorrect index"
},
{
"msg_contents": "You're right, a lot of information is missing but I'm unsure that the other information will make too much difference.\nI could drop all the other indexes on the table which aren't used here and the queries would still use the indexes they are currently using.\n\nI appreciate the idea that a boolean column selectivity might not be great. I've just tried creating indexes as follows:\nCREATE INDEX messageq17\n ON messageq_table\n USING btree\n (staff_ty, staff_id, incoming, inactive, entity_id);\n\nCREATE INDEX messageq18\n ON messageq_table\n USING btree\n (staff_ty, staff_id);\n\nWhen running query 2 as it stands the same thing happens, it still uses the messageq1 index.\n\nThe query is logically the same as using max, you are correct, but it's generated on the fly so the limit or the queried column may change.\n\nThe query plans were for the second query as I'm unsure that the first query is really relevant, it was simply there to justify the messageq1 index.\n\nThanks,\n\nFrom: Robert Klemme [mailto:[email protected]]\nSent: 03 August 2012 10:18\nTo: Russell Keane; pgsql-performance\nSubject: Re: [PERFORM] query using incorrect index\n\n\nOn Thu, Aug 2, 2012 at 4:54 PM, Russell Keane <[email protected]<mailto:[email protected]>> wrote:\n\nUsing PG 9.0 and given 2 queries (messageq_current is a view on the messageq_table):\n\nselect entity_id from messageq_current\nwhere entity_id = 123456;\n\nselect entity_id from messageq_current\nwhere incoming = true\nand inactive = false\nand staff_ty = 2\nand staff_id = 2\norder by entity_id desc\nlimit 1;\n\nand 2 indexes (there are 15 indexes in total but they are left out here for brevity):\n\nmessageq1:\nCREATE INDEX messageq1\n ON messageq_table\n USING btree\n (entity_id);\n\nAnd messageq4:\n\nCREATE INDEX messageq4\n ON messageq_table\n USING btree\n (inactive, staff_ty, staff_id, incoming, tran_dt);\n\nOf course a lot of detail is missing (full schema of table, all the other indexes) but with \"inactive\" a boolean column I suspect selectivity might not be too good here and so having it as a first column in a covering index is at least questionable. If query 2 is frequent you might also want to consider creating a partial index only on (staff_ty, staff_id) with filtering criteria on incoming and active as present in query 2.\n\nBtw, why don't you formulate query 2 as max query?\nselect max(entity_id) as entity_id\nfrom messageq_current\nwhere incoming = true\nand inactive = false\nand staff_ty = 2\nand staff_id = 2;\n\n\nWith the messageq1 index present, query 1 is very quick (0.094ms) and query 2 is very slow (241.515ms).\nIf I remove messageq1 then query 2 uses messageq4 and is very quick (0.098ms) but then query 1 must use a different index and is therefore slower (> 5ms).\n\nSo, to the Query plans:\n\nOf which query? Shouldn't there be four plans in total? 
I'd post plans here:\nhttp://explain.depesz.com/\n\nWith messageq1:\n\"Limit (cost=0.00..2670.50 rows=1 width=4) (actual time=241.481..241.481 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Buffers: shared hit=32 read=18870 written=12\"\n\" -> Index Scan Backward using messageq1 on prac_live_10112.messageq_table (cost=0.00..66762.53 rows=25 width=4) (actual time=241.479..241.479 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Filter: (messageq_table.incoming AND (NOT messageq_table.inactive) AND (messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2) AND (aud_status_to_flag(messageq_table.aud_status) = 1))\"\n\" Buffers: shared hit=32 read=18870 written=12\"\n\"Total runtime: 241.515 ms\"\n\nWithout messageq1:\n\"Limit (cost=12534.45..12534.45 rows=1 width=4) (actual time=0.055..0.055 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Buffers: shared read=3\"\n\" -> Sort (cost=12534.45..12534.51 rows=25 width=4) (actual time=0.054..0.054 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Sort Key: messageq_table.entity_id\"\n\" Sort Method: quicksort Memory: 17kB\"\n\" -> Bitmap Heap Scan on prac_live_10112.messageq_table (cost=174.09..12534.32 rows=25 width=4) (actual time=0.043..0.043 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Recheck Cond: ((messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2))\"\n\" Filter: (messageq_table.incoming AND (NOT messageq_table.inactive) AND (aud_status_to_flag(messageq_table.aud_status) = 1))\"\n\" Buffers: shared read=3\"\n\" -> Bitmap Index Scan on messageq4 (cost=0.00..174.08 rows=4920 width=0) (actual time=0.040..0.040 rows=0 loops=1)\"\n\" Index Cond: ((messageq_table.inactive = false) AND (messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2) AND (messageq_table.incoming = true))\"\n\" Buffers: shared read=3\"\n\"Total runtime: 0.098 ms\"\n\nClearly the statistics are off somehow but I really don't know where to start.\n\nAny help you can give me would be very much appreciated.\n\nKind regards\n\nrobert\n\n--\nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n\nYou’re right, a lot of information is missing but I’m unsure that the other information will make too much difference.I could drop all the other indexes on the table which aren’t used here and the queries would still use the indexes they are currently using. I appreciate the idea that a boolean column selectivity might not be great. I’ve just tried creating indexes as follows:CREATE INDEX messageq17 ON messageq_table USING btree (staff_ty, staff_id, incoming, inactive, entity_id); CREATE INDEX messageq18 ON messageq_table USING btree (staff_ty, staff_id); When running query 2 as it stands the same thing happens, it still uses the messageq1 index. The query is logically the same as using max, you are correct, but it’s generated on the fly so the limit or the queried column may change. The query plans were for the second query as I’m unsure that the first query is really relevant, it was simply there to justify the messageq1 index. 
Thanks, From: Robert Klemme [mailto:[email protected]] Sent: 03 August 2012 10:18To: Russell Keane; pgsql-performanceSubject: Re: [PERFORM] query using incorrect index On Thu, Aug 2, 2012 at 4:54 PM, Russell Keane <[email protected]> wrote: Using PG 9.0 and given 2 queries (messageq_current is a view on the messageq_table): select entity_id from messageq_currentwhere entity_id = 123456; select entity_id from messageq_currentwhere incoming = trueand inactive = falseand staff_ty = 2and staff_id = 2order by entity_id desclimit 1; and 2 indexes (there are 15 indexes in total but they are left out here for brevity): messageq1:CREATE INDEX messageq1 ON messageq_table USING btree (entity_id); And messageq4: CREATE INDEX messageq4 ON messageq_table USING btree (inactive, staff_ty, staff_id, incoming, tran_dt);Of course a lot of detail is missing (full schema of table, all the other indexes) but with \"inactive\" a boolean column I suspect selectivity might not be too good here and so having it as a first column in a covering index is at least questionable. If query 2 is frequent you might also want to consider creating a partial index only on (staff_ty, staff_id) with filtering criteria on incoming and active as present in query 2.Btw, why don't you formulate query 2 as max query?select max(entity_id) as entity_idfrom messageq_currentwhere incoming = trueand inactive = falseand staff_ty = 2and staff_id = 2; With the messageq1 index present, query 1 is very quick (0.094ms) and query 2 is very slow (241.515ms).If I remove messageq1 then query 2 uses messageq4 and is very quick (0.098ms) but then query 1 must use a different index and is therefore slower (> 5ms). So, to the Query plans:Of which query? Shouldn't there be four plans in total? I'd post plans here:http://explain.depesz.com/ With messageq1:\"Limit (cost=0.00..2670.50 rows=1 width=4) (actual time=241.481..241.481 rows=0 loops=1)\"\" Output: messageq_table.entity_id\"\" Buffers: shared hit=32 read=18870 written=12\"\" -> Index Scan Backward using messageq1 on prac_live_10112.messageq_table (cost=0.00..66762.53 rows=25 width=4) (actual time=241.479..241.479 rows=0 loops=1)\"\" Output: messageq_table.entity_id\"\" Filter: (messageq_table.incoming AND (NOT messageq_table.inactive) AND (messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2) AND (aud_status_to_flag(messageq_table.aud_status) = 1))\"\" Buffers: shared hit=32 read=18870 written=12\"\"Total runtime: 241.515 ms\" Without messageq1:\"Limit (cost=12534.45..12534.45 rows=1 width=4) (actual time=0.055..0.055 rows=0 loops=1)\"\" Output: messageq_table.entity_id\"\" Buffers: shared read=3\"\" -> Sort (cost=12534.45..12534.51 rows=25 width=4) (actual time=0.054..0.054 rows=0 loops=1)\"\" Output: messageq_table.entity_id\"\" Sort Key: messageq_table.entity_id\"\" Sort Method: quicksort Memory: 17kB\"\" -> Bitmap Heap Scan on prac_live_10112.messageq_table (cost=174.09..12534.32 rows=25 width=4) (actual time=0.043..0.043 rows=0 loops=1)\"\" Output: messageq_table.entity_id\"\" Recheck Cond: ((messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2))\"\" Filter: (messageq_table.incoming AND (NOT messageq_table.inactive) AND (aud_status_to_flag(messageq_table.aud_status) = 1))\"\" Buffers: shared read=3\"\" -> Bitmap Index Scan on messageq4 (cost=0.00..174.08 rows=4920 width=0) (actual time=0.040..0.040 rows=0 loops=1)\"\" Index Cond: ((messageq_table.inactive = false) AND (messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2) AND (messageq_table.incoming = true))\"\" 
Buffers: shared read=3\"\"Total runtime: 0.098 ms\" Clearly the statistics are off somehow but I really don’t know where to start. Any help you can give me would be very much appreciated.Kind regardsrobert-- remember.guy do |as, often| as.you_can - without endhttp://blog.rubybestpractices.com/",
"msg_date": "Fri, 3 Aug 2012 10:50:02 +0100",
"msg_from": "Russell Keane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query using incorrect index"
},
{
"msg_contents": "Settings query:\n\"version\";\"PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 32-bit\"\n\"bytea_output\";\"escape\"\n\"client_encoding\";\"UNICODE\"\n\"lc_collate\";\"English_United Kingdom.1252\"\n\"lc_ctype\";\"English_United Kingdom.1252\"\n\"listen_addresses\";\"*\"\n\"log_destination\";\"stderr\"\n\"log_duration\";\"off\"\n\"log_line_prefix\";\"%t \"\n\"log_min_duration_statement\";\"1ms\"\n\"log_statement\";\"none\"\n\"logging_collector\";\"on\"\n\"max_connections\";\"100\"\n\"max_stack_depth\";\"2MB\"\n\"port\";\"5433\"\n\"search_path\";\"prac_live_10112, prac_shared_10112, global\"\n\"server_encoding\";\"UTF8\"\n\"shared_buffers\";\"32MB\"\n\"TimeZone\";\"Europe/London\"\n\"work_mem\";\"1MB\"\n\nHardware:\nIt's important to note that this is a (purposely) low spec development machine but the performance story is a similar one on our test setup which is a lot closer to our live environment. (I'm in the process of getting figures on this).\nE8400 Core 2 Duo (2.99GHz)\n4GB ram\nxp (latest sp and all updates)\n1 300GB SATA2 drive with 170 GB free space\n\nExplain analyse with both indexes present but without the limit (uses the correct index):\n\n\"Sort (cost=12534.90..12534.97 rows=25 width=4) (actual time=0.055..0.055 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Sort Key: messageq_table.entity_id\"\n\" Sort Method: quicksort Memory: 17kB\"\n\" Buffers: shared read=3\"\n\" -> Bitmap Heap Scan on prac_live_10112.messageq_table (cost=174.09..12534.32 rows=25 width=4) (actual time=0.040..0.040 rows=0 loops=1)\"\n\" Output: messageq_table.entity_id\"\n\" Recheck Cond: ((messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2))\"\n\" Filter: (messageq_table.incoming AND (NOT messageq_table.inactive) AND (aud_status_to_flag(messageq_table.aud_status) = 1))\"\n\" Buffers: shared read=3\"\n\" -> Bitmap Index Scan on messageq4 (cost=0.00..174.08 rows=4920 width=0) (actual time=0.037..0.037 rows=0 loops=1)\"\n\" Index Cond: ((messageq_table.inactive = false) AND (messageq_table.staff_ty = 2) AND (messageq_table.staff_id = 2) AND (messageq_table.incoming = true))\"\n\" Buffers: shared read=3\"\n\"Total runtime: 0.092 ms\"\n\n\n\n-----Original Message-----\nFrom: Kevin Grittner [mailto:[email protected]] \nSent: 02 August 2012 21:13\nTo: Russell Keane; [email protected]\nSubject: Re: [PERFORM] query using incorrect index\n\nRussell Keane <[email protected]> wrote:\n \n> Clearly the statistics are off somehow but I really don't know where \n> to start.\n> \n> Any help you can give me would be very much appreciated.\n \nIt would help to know your more about your hardware and PostgreSQL configuration. The latter can probably best be communicated by copy/paste of the results of the query on this page:\n \nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \nCan you also post the EXPLAIN ANALYZE output for the slow query with both indexes present but without the LIMIT clause?\n \n-Kevin\n",
"msg_date": "Fri, 3 Aug 2012 11:00:29 +0100",
"msg_from": "Russell Keane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query using incorrect index"
},
{
"msg_contents": "Russell Keane <[email protected]> wrote:\n \n> \"log_min_duration_statement\";\"1ms\"\n \n> \"shared_buffers\";\"32MB\"\n> \"work_mem\";\"1MB\"\n \nThose are pretty low values even for a 4GB machine. I suggest the\nfollowing changes and additions, based on the fact that you seem to\nhave the active portion of the database fully cached.\n \nshared_buffers = '160MB'\nwork_mem = '8MB'\nseq_page_cost = 0.1\nrandom_page_cost = 0.1\ncpu_tuple_cost = 0.03\neffective_cache_size = '2GB'\n \n> Explain analyse with both indexes present but without the limit\n> (uses the correct index):\n \n> \"Total runtime: 0.092 ms\"\n \nPart of problem is that it thinks it will find a matching row fairly\nquickly, and having done so using the index it chose will mean it is\nthe *right* row. The problem is that there are no matching rows, so\nit has to scan the entire index. More fine-grained statistics\n*might* help. If other techniques don't help, you can rewrite the\nquery slightly to create an optimization fence, but that should be a\nlast resort. I agree with Robert that if you have a lot of queries\nthat select on \"incoming\" and/or \"inactive\", a conditional index\n(with a WHERE clause in its definition) is likely to be very\nhelpful.\n \n-Kevin\n",
"msg_date": "Fri, 03 Aug 2012 09:33:31 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query using incorrect index"
},
{
"msg_contents": "I tried creating the following index:\n\nCREATE INDEX messageq17\n ON messageq_table\n USING btree\n (staff_ty, staff_id, entity_id)\n WHERE inactive = false;\n\n'inactive = false' (active would be much easy but this is legacy) records should make up a smaller proportion of the overall dataset (and much more of the queries will specify this clause) and the results are very promising.\n\nI will also try changing the settings and report back.\n\nThanks again guys,\n\n\n\n-----Original Message-----\nFrom: Kevin Grittner [mailto:[email protected]] \nSent: 03 August 2012 15:34\nTo: Russell Keane; [email protected]\nSubject: Re: [PERFORM] query using incorrect index\n\nRussell Keane <[email protected]> wrote:\n \n> \"log_min_duration_statement\";\"1ms\"\n \n> \"shared_buffers\";\"32MB\"\n> \"work_mem\";\"1MB\"\n \nThose are pretty low values even for a 4GB machine. I suggest the following changes and additions, based on the fact that you seem to have the active portion of the database fully cached.\n \nshared_buffers = '160MB'\nwork_mem = '8MB'\nseq_page_cost = 0.1\nrandom_page_cost = 0.1\ncpu_tuple_cost = 0.03\neffective_cache_size = '2GB'\n \n> Explain analyse with both indexes present but without the limit (uses \n> the correct index):\n \n> \"Total runtime: 0.092 ms\"\n \nPart of problem is that it thinks it will find a matching row fairly quickly, and having done so using the index it chose will mean it is the *right* row. The problem is that there are no matching rows, so it has to scan the entire index. More fine-grained statistics\n*might* help. If other techniques don't help, you can rewrite the query slightly to create an optimization fence, but that should be a last resort. I agree with Robert that if you have a lot of queries that select on \"incoming\" and/or \"inactive\", a conditional index (with a WHERE clause in its definition) is likely to be very helpful.\n \n-Kevin\n",
"msg_date": "Fri, 3 Aug 2012 16:57:50 +0100",
"msg_from": "Russell Keane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query using incorrect index"
}
] |
[
{
"msg_contents": "select abbrev,utc_offset,count(*) from pg_timezone_names\nwhere abbrev='EST'\ngroup by abbrev,utc_offset\n\nThere are 12 times zones with 'EST' code, offset = GMT+10. And there are \n8 time zones with 'EST' code, offset= GMT+5 at the same time!\n\nSo how much it is supposed to be?\n\nselect now() at time zone 'UTC' - now() at time zone 'EST'\n\n(Actually it returns +5:00 but what is the explanation?)\n\nAnd how am I supposed to convert a date to Australian zone? This doesn't \nwork:\n\nselect now() at time zone 'Australia/ATC' -- time zone \"Australia/ATC\" \nnot recognized\n\nBackground: we have a site where multiple users are storing data in the \nsame database. All dates are stored in UTC, but they are allowed to give \ntheir preferred time zone as a \"user preference\". So far so good. The \nusers saves the code of the time zone, and we convert all timestamps in \nall queries with their preferred time zone. But we got some complaints, \nand this is how I discovered the problem.\n\nActually, there are multiple duplications:\n\n\nselect abbrev,count(distinct utc_offset)\nfrom pg_timezone_names\ngroup by abbrev\nhaving count(distinct utc_offset)>1\norder by 2 desc\n\n\n\"CST\";3\n\"CDT\";2\n\"AST\";2\n\"GST\";2\n\"IST\";2\n\"WST\";2\n\"EST\";2\n\n\nHow should I store the user's preferred time zone, and how am I supposed \nto convert dates into that time zone?\n\nThanks,\n\n Laszlo\n\n",
"msg_date": "Fri, 03 Aug 2012 10:31:43 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Messed up time zones"
},
{
"msg_contents": "Re-sending this since I seem to have left out the list itself:\n\nOn Fri, Aug 3, 2012 at 4:31 PM, Laszlo Nagy <[email protected]> wrote:\n\n> select abbrev,utc_offset,count(*) from pg_timezone_names\n> where abbrev='EST'\n> group by abbrev,utc_offset\n>\n> There are 12 times zones with 'EST' code, offset = GMT+10. And there are 8\n> time zones with 'EST' code, offset= GMT+5 at the same time!\n>\n> So how much it is supposed to be?\n>\n> select now() at time zone 'UTC' - now() at time zone 'EST'\n>\n> (Actually it returns +5:00 but what is the explanation?)\n>\n> And how am I supposed to convert a date to Australian zone? This doesn't\n> work:\n>\n> select now() at time zone 'Australia/ATC' -- time zone \"Australia/ATC\" not\n> recognized\n>\n> Background: we have a site where multiple users are storing data in the\n> same database. All dates are stored in UTC, but they are allowed to give\n> their preferred time zone as a \"user preference\". So far so good. The users\n> saves the code of the time zone, and we convert all timestamps in all\n> queries with their preferred time zone. But we got some complaints, and\n> this is how I discovered the problem.\n>\n> Actually, there are multiple duplications:\n>\n>\n> select abbrev,count(distinct utc_offset)\n> from pg_timezone_names\n> group by abbrev\n> having count(distinct utc_offset)>1\n> order by 2 desc\n>\n>\n> \"CST\";3\n> \"CDT\";2\n> \"AST\";2\n> \"GST\";2\n> \"IST\";2\n> \"WST\";2\n> \"EST\";2\n>\n>\n> How should I store the user's preferred time zone, and how am I supposed\n> to convert dates into that time zone?\n>\n> Thanks,\n>\n> Laszlo\n>\n>\n> --\n> Sent via pgsql-admin mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/**mailpref/pgsql-admin<http://www.postgresql.org/mailpref/pgsql-admin>\n>\n\nIsn't:\n\nselect now() at time zone 'Australia/ATC'\n\nsupposed to be:\n\nselect now() at time zone 'Australia/ACT'\n\nAnd looking at the pg_timezone_names table for EST, there's only one entry\nfor EST:\n\nSELECT * from pg_timezone_names where name = 'EST';\n name | abbrev | utc_offset | is_dst\n------+--------+------------+--------\n EST | EST | -05:00:00 | f\n\n\n-- \nJC de Villa\n\nRe-sending this since I seem to have left out the list itself:On Fri, Aug 3, 2012 at 4:31 PM, Laszlo Nagy <[email protected]> wrote:\nselect abbrev,utc_offset,count(*) from pg_timezone_names\nwhere abbrev='EST'\ngroup by abbrev,utc_offset\n\nThere are 12 times zones with 'EST' code, offset = GMT+10. And there are 8 time zones with 'EST' code, offset= GMT+5 at the same time!\n\nSo how much it is supposed to be?\n\nselect now() at time zone 'UTC' - now() at time zone 'EST'\n\n(Actually it returns +5:00 but what is the explanation?)\n\nAnd how am I supposed to convert a date to Australian zone? This doesn't work:\n\nselect now() at time zone 'Australia/ATC' -- time zone \"Australia/ATC\" not recognized\n\nBackground: we have a site where multiple users are storing data in the same database. All dates are stored in UTC, but they are allowed to give their preferred time zone as a \"user preference\". So far so good. The users saves the code of the time zone, and we convert all timestamps in all queries with their preferred time zone. 
But we got some complaints, and this is how I discovered the problem.\n\nActually, there are multiple duplications:\n\n\nselect abbrev,count(distinct utc_offset)\nfrom pg_timezone_names\ngroup by abbrev\nhaving count(distinct utc_offset)>1\norder by 2 desc\n\n\n\"CST\";3\n\"CDT\";2\n\"AST\";2\n\"GST\";2\n\"IST\";2\n\"WST\";2\n\"EST\";2\n\n\nHow should I store the user's preferred time zone, and how am I supposed to convert dates into that time zone?\n\nThanks,\n\n Laszlo\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin\nIsn't:\nselect now() at time zone 'Australia/ATC' \nsupposed to be:\nselect now() at time zone 'Australia/ACT' \nAnd looking at the pg_timezone_names table for EST, there's only one entry for EST:\n\nSELECT * from pg_timezone_names where name = 'EST'; name | abbrev | utc_offset | is_dst ------+--------+------------+-------- EST | EST | -05:00:00 | f\n-- JC de Villa",
"msg_date": "Fri, 3 Aug 2012 16:58:28 +0800",
"msg_from": "JC de Villa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Messed up time zones"
},
{
"msg_contents": ">\n> Isn't:\n>\n> select now() at time zone 'Australia/ATC'\n>\n> supposed to be:\n>\n> select now() at time zone 'Australia/ACT'\nI see now. The abbreviation is usually a time zone name. But to be \ncorrect, the time zone name should be used (and not the abbreviation).\n>\n> And looking at the pg_timezone_names table for EST, there's only one \n> entry for EST:\n>\n> SELECT * from pg_timezone_names where name = 'EST';\n> name | abbrev | utc_offset | is_dst\n> ------+--------+------------+--------\n> EST | EST | -05:00:00 | f\n\nOkay, but that is the \"name\", and not the \"abbrev\" field. So time zone \nabbreviations are not unique? Then probably it is my fault - I thought \nthat they will be unique. It is still an interesting question, how \nothers interpret these (non-unique) abbreviations? But I guess that is \nnot related to PostgreSQL so I'm being offtopic here.\n\n\nOne last question. Am I right in that PostgreSQL does not handle leap \nseconds?\n\ntemplate1=> set time zone 'UTC';\ntemplate1=> select '2008-12-31 23:59:60'::timestamp;\n timestamp\n---------------------\n 2009-01-01 00:00:00\n(1 row)\n\nAnd probably intervals are affected too:\n\ntemplate1=> set time zone 'UTC';\ntemplate1=> select '2008-12-31 00:00:00'::timestamp + '48 hours'::interval;\n timestamp\n---------------------\n 2009-01-02 00:00:00\n(1 row)\n\nShould be '2009-01-01 23:59:59' instead.\n\nThanks,\n\n Laszlo\n\n\n\n\n\n\n\n\n\n\n\n\nIsn't:\n\n\n\nselect now() at time zone\n 'Australia/ATC' \n\n\n\n\nsupposed to be:\n\n\nselect\n now() at time zone 'Australia/ACT' \n\n\n I see now. The abbreviation is usually a time zone name. But to be\n correct, the time zone name should be used (and not the\n abbreviation).\n\n\n\n\nAnd looking at the\n pg_timezone_names table for EST, there's only one entry for\n EST:\n\n\n\n\nSELECT * from pg_timezone_names where name = 'EST';\n name | abbrev | utc_offset | is_dst \n------+--------+------------+--------\n EST | EST | -05:00:00 | f\n\n\n\n Okay, but that is the \"name\", and not the \"abbrev\" field. So time\n zone abbreviations are not unique? Then probably it is my fault - I\n thought that they will be unique. It is still an interesting\n question, how others interpret these (non-unique) abbreviations? But\n I guess that is not related to PostgreSQL so I'm being offtopic\n here.\n\n\n One last question. Am I right in that PostgreSQL does not handle\n leap seconds?\n\n template1=> set time zone 'UTC';\n template1=> select '2008-12-31 23:59:60'::timestamp;\n timestamp \n ---------------------\n 2009-01-01 00:00:00\n (1 row)\n\n And probably intervals are affected too:\n\n template1=> set time zone 'UTC';\n template1=> select '2008-12-31 00:00:00'::timestamp + '48\n hours'::interval;\n timestamp \n ---------------------\n 2009-01-02 00:00:00\n (1 row)\n\n Should be '2009-01-01 23:59:59' instead.\n\n Thanks,\n\n Laszlo",
"msg_date": "Fri, 03 Aug 2012 11:18:35 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Messed up time zones"
},
{
"msg_contents": "On Fri, Aug 3, 2012 at 5:18 PM, Laszlo Nagy <[email protected]> wrote:\n\n>\n>\n> Isn't:\n>\n> select now() at time zone 'Australia/ATC'\n>\n> supposed to be:\n>\n> select now() at time zone 'Australia/ACT'\n>\n> I see now. The abbreviation is usually a time zone name. But to be\n> correct, the time zone name should be used (and not the abbreviation).\n>\n>\n> And looking at the pg_timezone_names table for EST, there's only one\n> entry for EST:\n>\n> SELECT * from pg_timezone_names where name = 'EST';\n> name | abbrev | utc_offset | is_dst\n> ------+--------+------------+--------\n> EST | EST | -05:00:00 | f\n>\n>\n> Okay, but that is the \"name\", and not the \"abbrev\" field. So time zone\n> abbreviations are not unique? Then probably it is my fault - I thought that\n> they will be unique. It is still an interesting question, how others\n> interpret these (non-unique) abbreviations? But I guess that is not related\n> to PostgreSQL so I'm being offtopic here.\n>\n>\n> One last question. Am I right in that PostgreSQL does not handle leap\n> seconds?\n>\n> template1=> set time zone 'UTC';\n> template1=> select '2008-12-31 23:59:60'::timestamp;\n> timestamp\n> ---------------------\n> 2009-01-01 00:00:00\n> (1 row)\n>\n> And probably intervals are affected too:\n>\n> template1=> set time zone 'UTC';\n> template1=> select '2008-12-31 00:00:00'::timestamp + '48 hours'::interval;\n> timestamp\n> ---------------------\n> 2009-01-02 00:00:00\n> (1 row)\n>\n> Should be '2009-01-01 23:59:59' instead.\n>\n> Thanks,\n>\n> Laszlo\n>\n>\n>\nWell, per the docs at\nhttp://www.postgresql.org/docs/9.1/static/functions-datetime.html, in\nparens under timezone:\n\n\"Technically, PostgreSQL uses UT1because leap seconds are not handled.\"\n\nAlthough there is a footnote on that page that states that:\n\n\"60 if leap seconds are implemented by the operating system\".\n\n-- \nJC de Villa\n\nOn Fri, Aug 3, 2012 at 5:18 PM, Laszlo Nagy <[email protected]> wrote:\n\n\n\n\n\nIsn't:\n\n\n\nselect now() at time zone\n 'Australia/ATC' \n\n\n\n\nsupposed to be:\n\n\nselect\n now() at time zone 'Australia/ACT' \n\n\n I see now. The abbreviation is usually a time zone name. But to be\n correct, the time zone name should be used (and not the\n abbreviation).\n\n\n\n\nAnd looking at the\n pg_timezone_names table for EST, there's only one entry for\n EST:\n\n\n\n\nSELECT * from pg_timezone_names where name = 'EST';\n name | abbrev | utc_offset | is_dst \n------+--------+------------+--------\n EST | EST | -05:00:00 | f\n\n\n\n Okay, but that is the \"name\", and not the \"abbrev\" field. So time\n zone abbreviations are not unique? Then probably it is my fault - I\n thought that they will be unique. It is still an interesting\n question, how others interpret these (non-unique) abbreviations? But\n I guess that is not related to PostgreSQL so I'm being offtopic\n here.\n\n\n One last question. 
Am I right in that PostgreSQL does not handle\n leap seconds?\n\n template1=> set time zone 'UTC';\n template1=> select '2008-12-31 23:59:60'::timestamp;\n timestamp \n ---------------------\n 2009-01-01 00:00:00\n (1 row)\n\n And probably intervals are affected too:\n\n template1=> set time zone 'UTC';\n template1=> select '2008-12-31 00:00:00'::timestamp + '48\n hours'::interval;\n timestamp \n ---------------------\n 2009-01-02 00:00:00\n (1 row)\n\n Should be '2009-01-01 23:59:59' instead.\n\n Thanks,\n\n Laszlo\n\n\n\nWell, per the docs at http://www.postgresql.org/docs/9.1/static/functions-datetime.html, in parens under timezone:\n\"Technically, PostgreSQL uses UT1because leap seconds are not handled.\"\nAlthough there is a footnote on that page that states that:\n\"60 if leap seconds are implemented by the operating system\".\n-- JC de Villa",
"msg_date": "Fri, 3 Aug 2012 17:48:23 +0800",
"msg_from": "JC de Villa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Messed up time zones"
},
{
"msg_contents": "On 2012-08-03 10:31, Laszlo Nagy wrote:\n> select abbrev,utc_offset,count(*) from pg_timezone_names\n> where abbrev='EST'\n> group by abbrev,utc_offset\n>\n> There are 12 times zones with 'EST' code, offset = GMT+10. And there \n> are 8 time zones with 'EST' code, offset= GMT+5 at the same time!\nSorry, I still have some questions.\n\ntemplate1=> set time zone 'UTC';\ntemplate1=> select to_char(('2011-10-30 00:00:00'::timestamp at time \nzone 'UTC') at time zone 'Europe/Budapest', 'YYYY-MM-DD HH24:MI:SS TZ');\n to_char\n----------------------\n 2011-10-30 02:00:00\n(1 row)\n\ntemplate1=> select to_char(('2011-10-30 01:00:00'::timestamp at time \nzone 'UTC') at time zone 'Europe/Budapest', 'YYYY-MM-DD HH24:MI:SS TZ');\n to_char\n----------------------\n 2011-10-30 02:00:00\n(1 row)\n\n\nThe time zone was not included in the output. I guess it is because the \nlast \"at time zone\" part converted the timestamptz into a timestamp. \nRight now, these results don't just look the same. They are actually the \nsame values, which is obviously not what I want. They have been \nconverted from different UTC values, so they should be different. I \nwould like to see \"2011-10-30 02:00:00+0600\" and \"2011-10-30 \n02:00:00+0500\", or something similar.\n\nSo the question is: how do I convert a timestamptz value into a \ndifferent time zone, without changing its type? E.g. it should remain a \ntimestamptz, but have a (possibly) different value and a different time \nzone assigned.\n\nThanks,\n\n Laszlo\n\n\n",
"msg_date": "Fri, 03 Aug 2012 12:40:04 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Messed up time zones"
},
{
"msg_contents": "Here is a better example that shows what I actually have in my database. \nSuppose I have this table, with UTC timestamps in it:\n\ntemplate1=> create table test ( a timestamptz not null primary key );\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n\"test_pkey\" for table \"test\"\nCREATE TABLE\ntemplate1=> insert into test values ('2011-10-30 00:00:00'::timestamp at \ntime zone 'UTC');\nINSERT 0 1\ntemplate1=> insert into test values ('2011-10-30 01:00:00'::timestamp at \ntime zone 'UTC');\nINSERT 0 1\ntemplate1=> set datestyle to \"postgres, postgres\";\nSET\ntemplate1=> select * from test;\n a\n------------------------------\n Sun Oct 30 00:00:00 2011 UTC\n Sun Oct 30 01:00:00 2011 UTC\n(2 rows)\n\n\nI would like to see the same values, just converted into a different \ntime zone. But still have timestamptz type!\n\nSo I try this:\n\n\ntemplate1=> select a at time zone 'Europe/Budapest' from test;\n timezone\n--------------------------\n Sun Oct 30 02:00:00 2011\n Sun Oct 30 02:00:00 2011\n(2 rows)\n\nWhich is not good, because the zone information was lost, and so I see \nidentical values, but they should be different.\n\nCasting to timestamptz doesn't help either, because casting happens \nafter the time zone information was lost:\n\ntemplate1=> select (a at time zone 'Europe/Budapest')::timestamptz from \ntest;\n timezone\n------------------------------\n Sun Oct 30 02:00:00 2011 UTC\n Sun Oct 30 02:00:00 2011 UTC\n(2 rows)\n\ntemplate1=>\n\nSo how do I create a query that results in something like:\n\n a\n------------------------------\n Sun Oct 30 02:00:00 2011 +0500\n Sun Oct 30 02:00:00 2011 +0600\n(2 rows)\n\n\n",
"msg_date": "Fri, 03 Aug 2012 12:55:47 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Messed up time zones"
},
{
"msg_contents": "Laszlo Nagy <[email protected]> writes:\n> So how do I create a query that results in something like:\n\n> a\n> ------------------------------\n> Sun Oct 30 02:00:00 2011 +0500\n> Sun Oct 30 02:00:00 2011 +0600\n> (2 rows)\n\nSet the \"timezone\" setting to the zone you have in mind, and then just\nprint the values. The reason there's no manual way to do rotation\nacross zones is that there's no need for one because it's done\nautomatically during printout of a timestamptz value.\n\nI suspect that you have not correctly internalized what timestamptz\nvalues actually are. Internally they are just time values specified in\nUTC (or UT1 if you want to be picky). On input, the value is rotated\nfrom whatever zone is specified in the string (or implicitly specified\nby \"timezone\") to UTC. On output, the value is rotated from UTC to\nwhatever the current \"timezone\" setting is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Aug 2012 10:19:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Messed up time zones"
},
{
"msg_contents": "On 2012-08-03 16:19, Tom Lane wrote:\n> Laszlo Nagy <[email protected]> writes:\n>> So how do I create a query that results in something like:\n>> a\n>> ------------------------------\n>> Sun Oct 30 02:00:00 2011 +0500\n>> Sun Oct 30 02:00:00 2011 +0600\n>> (2 rows)\n> Set the \"timezone\" setting to the zone you have in mind, and then just\n> print the values.\n\nmajorforms=> set time zone 'Europe/Budapest';\nSET\nmajorforms=> select * from test;\n a\n------------------------\n 2011-10-30 02:00:00+02\n 2011-10-30 02:00:00+01\n(2 rows)\n\nmajorforms=>\n\nIt works. Thank you!\n\nSo is it impossible to construct a query with columns that are different \ntime zones? I hope I'm not going to need that. :-)\n\n> The reason there's no manual way to do rotation\n> across zones is that there's no need for one because it's done\n> automatically during printout of a timestamptz value.\nI can come up with an example when it would be needed. For example, \nconsider a company with two sites in different time zones. Let's say \nthat they want to store time stamps of online meetings. They need to \ncreate a report that shows the starting time of the all meetings *in \nboth zones*. I see no way to do this in PostgreSQL. Of course, you can \nalways select the timestamps in UTC, and convert them into other time \nzones with a program so it is not a big problem. And if we go that \nroute, then there is not much point in using the timestamptz type, since \nwe already have to convert the values with a program...\n\n>\n> I suspect that you have not correctly internalized what timestamptz\n> values actually are. Internally they are just time values specified in\n> UTC (or UT1 if you want to be picky). On input, the value is rotated\n> from whatever zone is specified in the string (or implicitly specified\n> by \"timezone\") to UTC. On output, the value is rotated from UTC to\n> whatever the current \"timezone\" setting is.\nOh I see. So actually they don't store the zone? I have seen that \ntimestamptz and timestamp both occupy 8 bytes, but I didn't understand \ncompletely.\n\nIt also means that if I want to store the actual time zone (in what the \nvalue was originally recorded), then I have to store the zone in a \nseparate field. Later I can convert back to the original time zone, but \nonly with an external program.\n\nFine with me. I'm happy with this, just I did not understand how it works.\n\nThanks,\n\n Laszlo\n\n",
"msg_date": "Fri, 03 Aug 2012 17:23:51 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Messed up time zones"
},
{
"msg_contents": "On 8/3/2012 11:23 AM, Laszlo Nagy wrote:\n>> I suspect that you have not correctly internalized what timestamptz\n>> values actually are. Internally they are just time values specified in\n>> UTC (or UT1 if you want to be picky). On input, the value is rotated\n>> from whatever zone is specified in the string (or implicitly specified\n>> by \"timezone\") to UTC. On output, the value is rotated from UTC to\n>> whatever the current \"timezone\" setting is.\n> Oh I see. So actually they don't store the zone? I have seen that timestamptz and timestamp both occupy 8 bytes, but I didn't understand completely.\n>\n> It also means that if I want to store the actual time zone (in what the value was originally recorded), then I have to store the zone in a separate field. Later I can convert back to the original time zone, but only with an external program.\n>\n> Fine with me. I'm happy with this, just I did not understand how it works.\n\nYou could store the zone in a separate field and then create a VIEW on the table that used a function to take both values and return the timestamptz just as it was inserted.\n",
"msg_date": "Fri, 03 Aug 2012 11:34:20 -0400",
"msg_from": "Bill MacArthur <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Messed up time zones"
},
{
"msg_contents": "\n> You could store the zone in a separate field and then create a VIEW on \n> the table that used a function to take both values and return the \n> timestamptz just as it was inserted.\n>\nWell no, it is not possible. A timestamptz value is interpreted as UTC, \nregardless of your local timezone. A timestamp value is interpreted in \nyour local time zone. This is the main difference between them. You can \nchange *the interpretation* of these values with the \"at time zone\" \nexpression. But you cannot convert between time zones at all! Time zone \ninformation is not stored in any way - it is a global setting.\n\nI have intentionally chosen an example where the local time is changed \nfrom summer time to winter time (e.g. local time suddenly \"goes back\" \none hour). It demonstrates that you cannot use \"at time zone ....\" \nexpression to convert a timestamptz into a desired time zone manually.\n\nThe only case when time zone conversion occurs is when you format the \ntimestamp/timestamptz value into a text. As Tom Lane pointed out, the \nonly correct way to convert a timestamptz/timestamp value into a desired \ntime zone is to use the \"set time zone to ....\" command. But that \ncommand has a global effect, and it does not actually change the zone of \nthe stored value (because the time zone is not stored at all). It just \nchanges the formatting of those values, and as a result, you will get a \ncorrect textual representation of the original timestamp value in the \ndesired time zone. But you will *never* be able to get a correct \ntimestamp value in a desired time zone. All you can get is text.\n\nAs far as I'm concerned, I'm going to set the system's clock to UTC, \nstore everything in timestamp field (in UTC), and use a program to \nconvert fetched values before displaying them.\n\nRegards,\n\n Laszlo\n\n",
"msg_date": "Fri, 03 Aug 2012 18:06:38 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Messed up time zones"
},
{
"msg_contents": "On 08/03/2012 08:23 AM, Laszlo Nagy wrote:\n> ...\n>\n> It works. Thank you!\n>\n> So is it impossible to construct a query with columns that are \n> different time zones? I hope I'm not going to need that. :-)\n>\n\nI'm not sure you have internalized the meaning of timestamptz. It helps \nto instead think of it as a \"point in time\", i.e. the shuttle launched at...\n\nselect\nnow() at time zone 'UTC' as \"UTC\",\nnow() at time zone 'Asia/Urumqi' as \"Urumqi\",\nnow() at time zone 'Asia/Katmandu' as \"Katmandu\",\nnow() at time zone 'America/Martinique' as \"Martinique\",\nnow() at time zone 'America/Kralendijk' as \"Kralendijk\",\nnow() at time zone 'Africa/Algiers' as \"Algiers\",\nnow() at time zone 'Europe/Zurich' as \"Zurich\",\nnow() at time zone 'Australia/Brisbane' as \"Brisbane\",\nnow() at time zone 'Pacific/Galapagos' as \"Galapagos\"\n;\n\n-[ RECORD 1 ]--------------------------\nUTC | 2012-08-03 15:54:49.645586\nUrumqi | 2012-08-03 23:54:49.645586\nKatmandu | 2012-08-03 21:39:49.645586\nMartinique | 2012-08-03 11:54:49.645586\nKralendijk | 2012-08-03 11:54:49.645586\nAlgiers | 2012-08-03 16:54:49.645586\nZurich | 2012-08-03 17:54:49.645586\nBrisbane | 2012-08-04 01:54:49.645586\nGalapagos | 2012-08-03 09:54:49.645586\n\nAll the above are the exact same point in time merely stated as relevant \nto each location. Note that given a timestamp with time zone and a zone, \nPostgreSQL returns a timestamp without time zone (you know the zone \nsince you specified it). Conversely, given a local time (timestamp with \nout time zone) and a known location you can get the point in time \n(timestamptz):\n\nselect\n'2012-08-03 15:54:49.645586 UTC'::timestamptz,\n'2012-08-03 15:54:49.645586 Asia/Urumqi'::timestamptz,\n'2012-08-03 15:54:49.645586 Asia/Katmandu'::timestamptz,\n'2012-08-03 15:54:49.645586 America/Martinique'::timestamptz,\n'2012-08-03 15:54:49.645586 America/Kralendijk'::timestamptz,\n'2012-08-03 15:54:49.645586 Africa/Algiers'::timestamptz,\n'2012-08-03 15:54:49.645586 Europe/Zurich'::timestamptz,\n'2012-08-03 15:54:49.645586 Australia/Brisbane'::timestamptz,\n'2012-08-03 15:54:49.645586 Pacific/Galapagos'::timestamptz\n;\n\n-[ RECORD 1 ]------------------------------\ntimestamptz | 2012-08-03 08:54:49.645586-07\ntimestamptz | 2012-08-03 00:54:49.645586-07\ntimestamptz | 2012-08-03 03:09:49.645586-07\ntimestamptz | 2012-08-03 12:54:49.645586-07\ntimestamptz | 2012-08-03 12:54:49.645586-07\ntimestamptz | 2012-08-03 07:54:49.645586-07\ntimestamptz | 2012-08-03 06:54:49.645586-07\ntimestamptz | 2012-08-02 22:54:49.645586-07\ntimestamptz | 2012-08-03 14:54:49.645586-07\n\nI'm currently in Pacific Daylight Time hence the -07. But note that you \ncan specify an offset (-07) that is not the same as \n'America/Los_Angeles'. -07 is an offset, 'America/Los_Angeles' is a time \nzone and deals appropriately with Daylight Saving Time and the various \nchanges thereto through history.\n\nShould it be necessary, you could save time zone information in a \nseparate column. Note that you can specify time zone as a characteristic \nof a user if your database handles users across multiple zones (alter \nuser steve set timezone to 'America/Los_Angeles';)\n\nIt takes a bit of reading and experimenting to understand the subtleties \nof date/time handling but it's time well spent.\n\nCheers,\nSteve\n\n",
"msg_date": "Fri, 03 Aug 2012 09:20:31 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Messed up time zones"
},
{
"msg_contents": "Laszlo Nagy <[email protected]> writes:\n> I have intentionally chosen an example where the local time is changed \n> from summer time to winter time (e.g. local time suddenly \"goes back\" \n> one hour). It demonstrates that you cannot use \"at time zone ....\" \n> expression to convert a timestamptz into a desired time zone manually.\n\nUm, yes you can. The trick is to use a timezone name, not an\nabbreviation, in the AT TIME ZONE construct (for instance,\n'Europe/Budapest' not just 'CET'). That will do the rotation\nin a DST-aware fashion.\n\n> As far as I'm concerned, I'm going to set the system's clock to UTC, \n> store everything in timestamp field (in UTC), and use a program to \n> convert fetched values before displaying them.\n\n[ shrug... ] If you really insist on re-inventing that wheel, go\nahead, but it sounds to me like you'll just be introducing additional\npoints of failure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Aug 2012 12:38:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Messed up time zones"
},
{
"msg_contents": "\n> All the above are the exact same point in time merely stated as \n> relevant to each location. Note that given a timestamp with time zone \n> and a zone, PostgreSQL returns a timestamp without time zone (you know \n> the zone since you specified it). \nYes, I know the zone. But I don't know the offset from UTC.\n\nExample:\n\ntemplate1=> set timezone to 'UTC';\nSET\ntemplate1=> select ('2011-10-30 01:00:00'::timestamptz) at time zone \n'Europe/Budapest';\n timezone\n---------------------\n 2011-10-30 02:00:00 -- Is it winter or summer time?\n(1 row)\n\ntemplate1=> select ('2011-10-30 00:00:00'::timestamptz) at time zone \n'Europe/Budapest';\n timezone\n---------------------\n 2011-10-30 02:00:00 -- Is it winter or summer time? What is the \noffset from UTC here? Can you tell me when it was in UTC?\n(1 row)\n\ntemplate1=>\n\nWhat is more:\n\ntemplate1=> select (('2011-10-30 00:00:00'::timestamptz) at time zone \n'Europe/Budapest') is distinct from (('2011-10-30 \n01:00:00'::timestamptz) at time zone 'Europe/Budapest');\n ?column?\n----------\n f\n(1 row)\n\ntemplate1=>\n\nYeah, we know what time zone it is in, but we don't know when it was, \nthanks a lot. :-( It would be unambiguous to store the UTC offset along \nwith the value. But it is not how it was implemented.\n",
"msg_date": "Fri, 03 Aug 2012 19:21:08 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ADMIN] Messed up time zones"
},
{
"msg_contents": "2012.08.03. 18:38 keltez�ssel, Tom Lane �rta:\n> Laszlo Nagy <[email protected]> writes:\n>> I have intentionally chosen an example where the local time is changed\n>> from summer time to winter time (e.g. local time suddenly \"goes back\"\n>> one hour). It demonstrates that you cannot use \"at time zone ....\"\n>> expression to convert a timestamptz into a desired time zone manually.\n> Um, yes you can. The trick is to use a timezone name, not an\n> abbreviation, in the AT TIME ZONE construct (for instance,\n> 'Europe/Budapest' not just 'CET'). That will do the rotation\n> in a DST-aware fashion.\nAnd loose information at the same time. Because after the conversion, \nyou won't be able to tell if it is a summer or a winter time. So yes, \nyou are right. You can do that kind of conversion, but then sometimes \nyou won't know when it was, or what it means. This problem could be \nsolved by storing the UTC offset together with the time zone, internally \nin PostgreSQL.\n\nMaybe, if that is not a problem for the user, he can use \"at time zone\" \nfor converting between time zones. Personally, I will stick with UTC and \nuse a program to convert values, because I would like to know when it \nwas. :-)\n\n",
"msg_date": "Fri, 03 Aug 2012 19:25:48 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Messed up time zones"
},
{
"msg_contents": " > [ shrug... ] If you really insist on re-inventing that wheel, go \nahead, but it sounds to me like you'll just be introducing additional \npoints of failure. regards, tom lane\n\nI just checked some programming languages (Python, C#), and the same \nproblem exists there. All of them say that \"when the time is ambiguous, \nthen it is assumed to be in standard time\". So the representation is \nambiguous in various programming languages too. You are right - it would \nbe reinventing the wheel.\n\nAlthough I don't like the fact that we are using an ambiguous system for \nmeasuring time, after all the problem was in my head. I'm sorry for \nbeing hardheaded.\n",
"msg_date": "Fri, 03 Aug 2012 20:25:32 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Messed up time zones"
},
{
"msg_contents": "On 08/03/2012 10:21 AM, Laszlo Nagy wrote:\n>\n>> All the above are the exact same point in time merely stated as \n>> relevant to each location. Note that given a timestamp with time zone \n>> and a zone, PostgreSQL returns a timestamp without time zone (you \n>> know the zone since you specified it). \n> Yes, I know the zone. But I don't know the offset from UTC.\n>\n> Example:\n>\n> template1=> set timezone to 'UTC';\n> SET\n> template1=> select ('2011-10-30 01:00:00'::timestamptz) at time zone \n> 'Europe/Budapest';\n> timezone\n> ---------------------\n> 2011-10-30 02:00:00 -- Is it winter or summer time?\n> (1 row)\n>\n> template1=> select ('2011-10-30 00:00:00'::timestamptz) at time zone \n> 'Europe/Budapest';\n> timezone\n> ---------------------\n> 2011-10-30 02:00:00 -- Is it winter or summer time? What is the \n> offset from UTC here? Can you tell me when it was in UTC?\n> (1 row)\n>\n> template1=>\n>\n\nI can not from the given information. Can you? The given information is \nambiguous as are all times during the hour of fall-back everywhere. That \nleaves developers with a choice: choose an interpretation or throw an \nerror. PostgreSQL chooses to use an interpretation.\n\nIt would be nice if there were a specification as to how such ambiguous \ndata should be interpreted. Perhaps someone can point me to one and to \nany relevant documentation detailing how PostgreSQL handles such data. \nAs it is, you need to be aware of how each part of your system deals \nwith such. For example (using my local time zone) using the date command \non Linux I see that\n\"date -d '2012-11-04 0130'\"\nreturns\n\"Sun Nov 4 01:30:00 PDT 2012\" (Still in Daylight Saving Time)\n\nBut given the same input, PostgreSQL interprets it as standard time \n(offset -08):\nselect '2012-11-04 0130'::timestamptz;\n timestamptz\n------------------------\n 2012-11-04 01:30:00-08\n\n> What is more:\n>\n> template1=> select (('2011-10-30 00:00:00'::timestamptz) at time zone \n> 'Europe/Budapest') is distinct from (('2011-10-30 \n> 01:00:00'::timestamptz) at time zone 'Europe/Budapest');\n> ?column?\n> ----------\n> f\n> (1 row)\n>\n> template1=>\n>\n> Yeah, we know what time zone it is in, but we don't know when it was, \n> thanks a lot. :-( It would be unambiguous to store the UTC offset \n> along with the value. But it is not how it was implemented.\n>\n>\nSo you took two distinct points in time, threw away some critical \ninformation, and are surprised why they are now equal? Then don't do \nthat. It's the equivalent of being surprised that www.microsoft.com is \nthe same as www.apple.com when comparing them on the short hostname \nonly. If you want to know if two points in time differ, just compare them.\n\nSpending a couple hours reading \nhttp://www.postgresql.org/docs/current/static/datatype-datetime.html \nwill be time well spent.\n\nCheers,\nSteve\n\n",
"msg_date": "Fri, 03 Aug 2012 11:25:53 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Messed up time zones"
},
{
"msg_contents": "\n> So you took two distinct points in time, threw away some critical \n> information, and are surprised why they are now equal?\nWell, I did not want to throw away any information. The actual \nrepresentation could be something like:\n\n\"2012-11-04 01:30:00-08 in Europe/Budapest, Winter time\"\n\nand\n\n\"2012-11-04 01:30:00-08 in Europe/Budapest, Summer time\".\n\nIt would be unambiguous, everybody would know the time zone, the UTC \noffset and the time value, and conversion back to UTC would be \nunambiguous too.\n\nI presumed that the representation is like that. But I was wrong. I have \nchecked other programming languages. As it turns out, nobody wants to \nchange the representation just because there can be an ambiguous hour in \nevery year. Now I think that most systems treat ambiguous time stamps as \nif they were in standard time. And who am I to go against the main flow? \nI'm sorry, I admit that the problem was in my head.\n\n",
"msg_date": "Fri, 03 Aug 2012 20:37:45 +0200",
"msg_from": "Laszlo Nagy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ADMIN] Messed up time zones"
}
] |
[
{
"msg_contents": "I'm having a problem with a query on our production server, but not on a laptop running a similar postgres version with a recent backup copy of the same table. I tried reindexing the table on the production server, but it didn't make any difference. Other queries on the same table are plenty fast. \n\nThis query has been slow, but never like this, particularly during a period when there are only a couple of connections in use. \n\nVacuum and analyze are run nightly (and show as such in pg_stat_user_tables) in addition to autovacuum during the day. Here are my autovacuum settings, but when I checked last_autovacuum & last_autoanalyze in pg_stat_user_tables those fields were blank. \n\nautovacuum = on \nlog_autovacuum_min_duration = 10 \nautovacuum_max_workers = 3 \nautovacuum_naptime = 1min \nautovacuum_vacuum_threshold = 50 \nautovacuum_analyze_threshold = 50 \nautovacuum_vacuum_scale_factor = 0.2 \nautovacuum_analyze_scale_factor = 0.1 \nautovacuum_freeze_max_age = 200000000 \nautovacuum_vacuum_cost_delay = 10ms (changed earlier today from 1000ms) \nautovacuum_vacuum_cost_limit = -1\n\nwal_level = minimal\nwal_buffers = 16MB\n\nThe only recent change was moving the 3 databases we have from multiple raid 1 drives with tablespaces spread all over to one large raid10 with indexes and data in pg_default. WAL for this table was moved as well.\n\nDoes anyone have any suggestions on where to look for the problem? \n\nclientlog table info:\n\nSize: 1.94G\n\n Column | Type | Modifiers \n----------+-----------------------------+-----------\n pid0 | integer | not null\n rid | integer | not null\n verb | character varying(32) | not null\n noun | character varying(32) | not null\n detail | text | \n path | character varying(256) | not null\n ts | timestamp without time zone | \n applies2 | integer | \n toname | character varying(128) | \n byname | character varying(128) | \nIndexes:\n \"clientlog_applies2\" btree (applies2)\n \"clientlog_pid0_key\" btree (pid0)\n \"clientlog_rid_key\" btree (rid)\n \"clientlog_ts\" btree (ts)\n\nThe query, hardware info, and links to both plans:\n\nexplain analyze select max(ts) as ts from clientlog where applies2=256;\n\nProduction server:\n- 4 dual-core AMD Opteron 2212 processors, 2010.485 MHz\n- 64GB RAM\n- 464GB RAID10 drive \n- Linux 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux\n PostgreSQL 9.0.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\n\nhttp://explain.depesz.com/s/8R4\n \n\n From laptop running Linux 2.6.34.9-69.fc13.868 with 3G ram against a copy of the same table:\nPostgreSQL 9.0.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.4.4 20100630 (Red Hat 4.4.4-10), 32-bit\n\nhttp://explain.depesz.com/s/NQl\n\nThank you,\nMidge\n\n\n\n\n\n\n\nI'm having a \nproblem with a query on our production server, but not on a laptop running a \nsimilar postgres version with a recent backup copy of the same table. \nI tried reindexing the table on the production \nserver, but it didn't make any difference. Other queries on the same table are \nplenty fast. \n \nThis query has been slow, but never like this, \nparticularly during a period when there are only a couple of connections in use. \n\n \nVacuum and analyze are run nightly (and \nshow as such in pg_stat_user_tables) in addition to autovacuum during \nthe day. Here are my autovacuum settings, but when I checked \nlast_autovacuum & last_autoanalyze in pg_stat_user_tables those fields \nwere blank. 
\n \nautovacuum = \non \nlog_autovacuum_min_duration = 10 \nautovacuum_max_workers = \n3 \nautovacuum_naptime = \n1min \nautovacuum_vacuum_threshold = 50 \nautovacuum_analyze_threshold = 50 \nautovacuum_vacuum_scale_factor = 0.2 \nautovacuum_analyze_scale_factor = 0.1 \nautovacuum_freeze_max_age = 200000000 \nautovacuum_vacuum_cost_delay = 10ms (changed earlier today from \n1000ms) autovacuum_vacuum_cost_limit = -1\n \nwal_level = minimal\nwal_buffers = 16MB\n \nThe only recent change was moving the 3 databases \nwe have from multiple raid 1 drives with tablespaces spread all over to one \nlarge raid10 with indexes and data in pg_default. WAL for this table was moved \nas well.\n \nDoes anyone have any suggestions on where to look \nfor the problem? \n \nclientlog table info:\n \nSize: 1.94G\n \n Column \n| \nType | \nModifiers \n----------+-----------------------------+----------- pid0 \n| \ninteger \n| not null rid | \ninteger \n| not null verb | character \nvarying(32) | not \nnull noun | character \nvarying(32) | not \nnull detail | \ntext \n| path | character \nvarying(256) | not \nnull ts | timestamp without time \nzone | applies2 | \ninteger \n| toname | character \nvarying(128) | byname | \ncharacter varying(128) | \nIndexes: \"clientlog_applies2\" btree \n(applies2) \"clientlog_pid0_key\" btree \n(pid0) \"clientlog_rid_key\" btree \n(rid) \"clientlog_ts\" btree (ts)\nThe query, hardware info, and links to both \nplans:\n \nexplain analyze select max(ts) as ts from \nclientlog where applies2=256;\n \n\nProduction server:\n\n- 4 dual-core AMD Opteron 2212 processors, \n2010.485 MHz- 64GB RAM- 464GB RAID10 drive - Linux 2.6.18-164.el5 #1 \nSMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux\n PostgreSQL 9.0.4 on \nx86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat \n4.1.2-46), 64-bit\nhttp://explain.depesz.com/s/8R4 \n \nFrom laptop running Linux 2.6.34.9-69.fc13.868 \nwith 3G ram against a copy of the same table:\nPostgreSQL 9.0.2 on i686-pc-linux-gnu, compiled \nby GCC gcc (GCC) 4.4.4 20100630 (Red Hat 4.4.4-10), 32-bit\n \nhttp://explain.depesz.com/s/NQl\n \nThank you,\nMidge",
"msg_date": "Fri, 3 Aug 2012 17:38:33 -0700",
"msg_from": "\"Midge Brown\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow query, different plans"
},
{
"msg_contents": "Midge --\n\nSorry for top-quoting -- challenged mail.\n\nPerhaps a difference in the stats estimates -- default_statistics_target ?\n\nCan you show us a diff between the postgres config files for each instance ? Maybe something there ...\n\nGreg Williamson\n\n\n\n>________________________________\n> From: Midge Brown <[email protected]>\n>To: [email protected] \n>Sent: Friday, August 3, 2012 5:38 PM\n>Subject: [PERFORM] slow query, different plans\n> \n>\n> \n>I'm having a \nproblem with a query on our production server, but not on a laptop running a \nsimilar postgres version with a recent backup copy of the same table. I tried reindexing the table on the production \nserver, but it didn't make any difference. Other queries on the same table are \nplenty fast. \n> \n>This query has been slow, but never like this, \nparticularly during a period when there are only a couple of connections in use. \n> \n>Vacuum and analyze are run nightly (and \nshow as such in pg_stat_user_tables) in addition to autovacuum during \nthe day. Here are my autovacuum settings, but when I checked \nlast_autovacuum & last_autoanalyze in pg_stat_user_tables those fields \nwere blank. \n> \n>autovacuum = \non \n>log_autovacuum_min_duration = 10 \n>autovacuum_max_workers = \n3 \n>autovacuum_naptime = \n1min \n>autovacuum_vacuum_threshold = 50 \n>autovacuum_analyze_threshold = 50 \n>autovacuum_vacuum_scale_factor = 0.2 \n>autovacuum_analyze_scale_factor = 0.1 \n>autovacuum_freeze_max_age = 200000000 \n>autovacuum_vacuum_cost_delay = 10ms (changed earlier today from \n1000ms) \n>autovacuum_vacuum_cost_limit = -1\n> \n>wal_level = minimal\n>wal_buffers = 16MB\n> \n>The only recent change was moving the 3 databases \nwe have from multiple raid 1 drives with tablespaces spread all over to one \nlarge raid10 with indexes and data in pg_default. WAL for this table was moved \nas well.\n> \n>Does anyone have any suggestions on where to look \nfor the problem? 
\n> \n>clientlog table info:\n> \n>Size: 1.94G\n> \n> Column \n| \nType | \nModifiers \n>----------+-----------------------------+-----------\n> pid0 \n| \ninteger \n| not null\n> rid | \ninteger \n| not null\n> verb | character \nvarying(32) | not \nnull\n> noun | character \nvarying(32) | not \nnull\n> detail | \ntext \n| \n> path | character \nvarying(256) | not \nnull\n> ts | timestamp without time \nzone | \n> applies2 | \ninteger \n| \n> toname | character \nvarying(128) | \n> byname | \ncharacter varying(128) | \n>Indexes:\n> \"clientlog_applies2\" btree \n(applies2)\n> \"clientlog_pid0_key\" btree \n(pid0)\n> \"clientlog_rid_key\" btree \n(rid)\n> \"clientlog_ts\" btree (ts)\n>\n>The query, hardware info, and links to both \nplans:\n> \n>explain analyze select max(ts) as ts from \nclientlog where applies2=256;\n> \n>Production server:\n>- 4 dual-core AMD Opteron 2212 processors, \n2010.485 MHz\n>- 64GB RAM\n>- 464GB RAID10 drive \n>- Linux 2.6.18-164.el5 #1 \nSMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux\n> PostgreSQL 9.0.4 on \nx86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat \n4.1.2-46), 64-bit\n>\n>http://explain.depesz.com/s/8R4\n> \n> \n>From laptop running Linux 2.6.34.9-69.fc13.868 \nwith 3G ram against a copy of the same table:\n>PostgreSQL 9.0.2 on i686-pc-linux-gnu, compiled \nby GCC gcc (GCC) 4.4.4 20100630 (Red Hat 4.4.4-10), 32-bit\n> \n>http://explain.depesz.com/s/NQl\n> \n>Thank you,\n>Midge\n> \n>\n>\nMidge --Sorry for top-quoting -- challenged mail.Perhaps a difference in the stats estimates -- default_statistics_target ?Can you show us a diff between the postgres config files for each instance ? Maybe something there ...Greg Williamson From: Midge Brown <[email protected]> To: [email protected] Sent: Friday, August 3, 2012 5:38 PM Subject: [PERFORM] slow query, different plans \n\n\n\nI'm having a \nproblem with a query on our production server, but not on a laptop running a \nsimilar postgres version with a recent backup copy of the same table. \nI tried reindexing the table on the production \nserver, but it didn't make any difference. Other queries on the same table are \nplenty fast. \n \nThis query has been slow, but never like this, \nparticularly during a period when there are only a couple of connections in use. \n\n \nVacuum and analyze are run nightly (and \nshow as such in pg_stat_user_tables) in addition to autovacuum during \nthe day. Here are my autovacuum settings, but when I checked \nlast_autovacuum & last_autoanalyze in pg_stat_user_tables those fields \nwere blank. \n \nautovacuum = \non \nlog_autovacuum_min_duration = 10 \nautovacuum_max_workers = \n3 \nautovacuum_naptime = \n1min \nautovacuum_vacuum_threshold = 50 \nautovacuum_analyze_threshold = 50 \nautovacuum_vacuum_scale_factor = 0.2 \nautovacuum_analyze_scale_factor = 0.1 \nautovacuum_freeze_max_age = 200000000 \nautovacuum_vacuum_cost_delay = 10ms (changed earlier today from \n1000ms) autovacuum_vacuum_cost_limit = -1\n \nwal_level = minimal\nwal_buffers = 16MB\n \nThe only recent change was moving the 3 databases \nwe have from multiple raid 1 drives with tablespaces spread all over to one \nlarge raid10 with indexes and data in pg_default. WAL for this table was moved \nas well.\n \nDoes anyone have any suggestions on where to look \nfor the problem? 
\n \nclientlog table info:\n \nSize: 1.94G\n \n Column \n| \nType | \nModifiers \n----------+-----------------------------+----------- pid0 \n| \ninteger \n| not null rid | \ninteger \n| not null verb | character \nvarying(32) | not \nnull noun | character \nvarying(32) | not \nnull detail | \ntext \n| path | character \nvarying(256) | not \nnull ts | timestamp without time \nzone | applies2 | \ninteger \n| toname | character \nvarying(128) | byname | \ncharacter varying(128) | \nIndexes: \"clientlog_applies2\" btree \n(applies2) \"clientlog_pid0_key\" btree \n(pid0) \"clientlog_rid_key\" btree \n(rid) \"clientlog_ts\" btree (ts)\nThe query, hardware info, and links to both \nplans:\n \nexplain analyze select max(ts) as ts from \nclientlog where applies2=256;\n \n\nProduction server:\n\n- 4 dual-core AMD Opteron 2212 processors, \n2010.485 MHz- 64GB RAM- 464GB RAID10 drive - Linux 2.6.18-164.el5 #1 \nSMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux\n PostgreSQL 9.0.4 on \nx86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat \n4.1.2-46), 64-bit\nhttp://explain.depesz.com/s/8R4 \n \nFrom laptop running Linux 2.6.34.9-69.fc13.868 \nwith 3G ram against a copy of the same table:\nPostgreSQL 9.0.2 on i686-pc-linux-gnu, compiled \nby GCC gcc (GCC) 4.4.4 20100630 (Red Hat 4.4.4-10), 32-bit\n \nhttp://explain.depesz.com/s/NQl\n \nThank you,\nMidge",
"msg_date": "Fri, 3 Aug 2012 18:30:56 -0700 (PDT)",
"msg_from": "Greg Williamson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow query, different plans"
},
{
"msg_contents": "\"Midge Brown\" <[email protected]> writes:\n> I'm having a problem with a query on our production server, but not on a laptop running a similar postgres version with a recent backup copy of the same table. I tried reindexing the table on the production server, but it didn't make any difference. Other queries on the same table are plenty fast. \n\nReindexing won't help that. The problem is a bad statistical estimate;\nit thinks there are about 700 rows with applies2 = 256, when there's\nreally only one. That means the \"fast\" plan is a lot faster than the\nplanner gives it credit for, and conversely the \"slow\" plan is a lot\nslower than the planner is expecting. Their estimated costs end up\nnearly the same, which makes it a bit of a chance matter which one is\npicked --- but the true costs are a lot different. So you need to fix\nthat rowcount estimate. Raising the stats target for the table might\nhelp.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 04 Aug 2012 02:26:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow query, different plans"
},
{
"msg_contents": "----- Original Message ----- \n From: Tom Lane \n To: Midge Brown \n Cc: [email protected] \n Sent: Friday, August 03, 2012 11:26 PM\n Subject: Re: [PERFORM] slow query, different plans\n\n\n \"Midge Brown\" <[email protected]> writes:\n > I'm having a problem with a query on our production server, but not on a laptop running a similar postgres version with a recent backup copy of the same table. I tried reindexing the table on the production server, but it didn't make any difference. Other queries on the same table are plenty fast. \n\n Reindexing won't help that. The problem is a bad statistical estimate;\n it thinks there are about 700 rows with applies2 = 256, when there's\n really only one. That means the \"fast\" plan is a lot faster than the\n planner gives it credit for, and conversely the \"slow\" plan is a lot\n slower than the planner is expecting. Their estimated costs end up\n nearly the same, which makes it a bit of a chance matter which one is\n picked --- but the true costs are a lot different. So you need to fix\n that rowcount estimate. Raising the stats target for the table might\n help.\n\n regards, tom lane\n\n -- \n\n I added \"and ts is not null\" to the query and the planner came back with .102 ms. The problem area in production went from a 10 second response to < 1 second. \n\n Thanks for the responses.\n\n -Midge\n\n\n\n\n\n\n----- Original Message ----- \n\nFrom:\nTom Lane \nTo: Midge Brown \nCc: [email protected]\n\nSent: Friday, August 03, 2012 11:26 \n PM\nSubject: Re: [PERFORM] slow query, \n different plans\n\n\"Midge Brown\" <[email protected]> \n writes:> I'm having a problem with a query on our production server, \n but not on a laptop running a similar postgres version with a recent backup \n copy of the same table. I tried reindexing the table on the production server, \n but it didn't make any difference. Other queries on the same table are plenty \n fast. Reindexing won't help that. The problem is a bad \n statistical estimate;it thinks there are about 700 rows with applies2 = \n 256, when there'sreally only one. That means the \"fast\" plan is a \n lot faster than theplanner gives it credit for, and conversely the \"slow\" \n plan is a lotslower than the planner is expecting. Their estimated \n costs end upnearly the same, which makes it a bit of a chance matter which \n one ispicked --- but the true costs are a lot different. So you need \n to fixthat rowcount estimate. Raising the stats target for the table \n mighthelp.regards, tom lane-- \nI added \"and ts is not null\" to the query \n and the planner came back with .102 ms. The problem area in production went \n from a 10 second response to < 1 second. \n \nThanks for the responses.\n \n-Midge",
"msg_date": "Mon, 6 Aug 2012 11:36:43 -0700",
"msg_from": "\"Midge Brown\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow query, different plans"
}
] |
[
{
"msg_contents": "Hi,\n\n \n\nI am planning a Postgres migration from 8.4 to 9.1 to be able to leverage\nthe replication features available in the 9.1 version. I would like to\nunderstand the following things in this regard:\n\n \n\n1. Any good documentation which should help in this upgrade.\n\n2. To be able to replicate the complete steps in a test environment\nbefore doing it in LIVE which is running 9.0, is it possible to revert this\ndatabase to 8.4 and then upgrade to 9.1. \n\n3. Any known issues and changes required to be done in the application\nfor this upgrade.\n\n \n\nThanks,\nrajiv\n\n\nHi, I am planning a Postgres migration from 8.4 to 9.1 to be able to leverage the replication features available in the 9.1 version. I would like to understand the following things in this regard: 1. Any good documentation which should help in this upgrade.2. To be able to replicate the complete steps in a test environment before doing it in LIVE which is running 9.0, is it possible to revert this database to 8.4 and then upgrade to 9.1. 3. Any known issues and changes required to be done in the application for this upgrade. Thanks,rajiv",
"msg_date": "Mon, 6 Aug 2012 11:08:34 +0530",
"msg_from": "\"Rajiv Kasera\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres Upgrade from 8.4 to 9.1"
},
{
"msg_contents": "On 08/06/2012 01:38 PM, Rajiv Kasera wrote:\n>\n> Hi,\n>\n> I am planning a Postgres migration from 8.4 to 9.1 to be able to \n> leverage the replication features available in the 9.1 version. I \n> would like to understand the following things in this regard:\n>\n> 1.Any good documentation which should help in this upgrade.\n>\nThe most important documentation here is the release notes for the major \n.0 versions:\n\nhttp://www.postgresql.org/docs/current/static/release-9-1.html \n<http://www.postgresql.org/docs/9.1/static/release-9-1.html>\nhttp://www.postgresql.org/docs/ \n<http://www.postgresql.org/docs/9.1/static/release-9-0.html>current \n<http://www.postgresql.org/docs/9.1/static/release-9-1.html>/static/release-9-0.html \n<http://www.postgresql.org/docs/9.1/static/release-9-0.html>\n\n... in particular the \"Migration to\" sections.\n\n> 2.To be able to replicate the complete steps in a test environment \n> before doing it in LIVE which is running 9.0, is it possible to revert \n> this database to 8.4 and then upgrade to 9.1.\n>\nNo. Take a copy before upgrading. Always. Keep the copy read-only in \nsome safe place.\n\nIf you want to revert, make a copy of your backup and use that.\n\nUpgrades from 8.4 to 9.0 or 9.1 require a dump and reload or the use of \nthe pg_upgrade tool. You can't just install the new version and start it \non your old database.\n\n> 3.Any known issues and changes required to be done in the application \n> for this upgrade.\n>\n\nSee the release notes.\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 08/06/2012 01:38 PM, Rajiv Kasera\n wrote:\n\n\n\n\n\nHi,\n \nI am planning a Postgres migration from 8.4\n to 9.1 to be able to leverage the replication features\n available in the 9.1 version. I would like to understand the\n following things in this regard:\n \n1. Any\n good documentation which should help in this upgrade.\n\n\n The most important documentation here is the release notes for the\n major .0 versions:\n\n\nhttp://www.postgresql.org/docs/current/static/release-9-1.html\n\nhttp://www.postgresql.org/docs/current/static/release-9-0.html\n\n ... in particular the \"Migration to\" sections.\n\n\n\n2. To\n be able to replicate the complete steps in a test environment\n before doing it in LIVE which is running 9.0, is it possible\n to revert this database to 8.4 and then upgrade to 9.1. \n\n\n No. Take a copy before upgrading. Always. Keep the copy read-only in\n some safe place.\n\n If you want to revert, make a copy of your backup and use that.\n\n Upgrades from 8.4 to 9.0 or 9.1 require a dump and reload or the use\n of the pg_upgrade tool. You can't just install the new version and\n start it on your old database.\n\n\n\n3. Any\n known issues and changes required to be done in the\n application for this upgrade.\n\n\n\n See the release notes.\n\n --\n Craig Ringer",
"msg_date": "Wed, 08 Aug 2012 14:50:26 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Upgrade from 8.4 to 9.1"
}
] |
[
{
"msg_contents": "Hi, my query is very simple:\n\nselect\n msg_id,\n msg_type,\n ship_pos_messages.pos_georef1,\n ship_pos_messages.pos_georef2,\n ship_pos_messages.pos_georef3,\n ship_pos_messages.pos_georef4,\n obj_id,\n ship_speed,\n ship_heading,\n ship_course,\n pos_point\n from\n feed_all_y2012m08.ship_pos_messages\n where\n extract('day' from msg_date_rec) = 1\n AND msg_id = any(ARRAY[7294724,14174174,22254408]);\n\nThe msg_id is the pkey on the ship_pos_messages table and in this \nexample it is working fast as it uses the pkey (primary key index) to \nmake the selection. The expplain anayze follows:\n\"Result (cost=0.00..86.16 rows=5 width=117) (actual \ntime=128.734..163.319 rows=3 loops=1)\"\n\" -> Append (cost=0.00..86.16 rows=5 width=117) (actual \ntime=128.732..163.315 rows=3 loops=1)\"\n\" -> Seq Scan on ship_pos_messages (cost=0.00..0.00 rows=1 \nwidth=100) (actual time=0.001..0.001 rows=0 loops=1)\"\n\" Filter: ((msg_id = ANY \n('{7294724,14174174,22254408}'::integer[])) AND (date_part('day'::text, \nmsg_date_rec) = 1::double precision))\"\n\" -> Seq Scan on ship_a_pos_messages ship_pos_messages \n(cost=0.00..0.00 rows=1 width=100) (actual time=0.000..0.000 rows=0 \nloops=1)\"\n\" Filter: ((msg_id = ANY \n('{7294724,14174174,22254408}'::integer[])) AND (date_part('day'::text, \nmsg_date_rec) = 1::double precision))\"\n\" -> Bitmap Heap Scan on ship_b_std_pos_messages \nship_pos_messages (cost=13.41..25.42 rows=1 width=128) (actual \ntime=49.127..49.127 rows=0 loops=1)\"\n\" Recheck Cond: (msg_id = ANY \n('{7294724,14174174,22254408}'::integer[]))\"\n\" Filter: (date_part('day'::text, msg_date_rec) = 1::double \nprecision)\"\n\" -> Bitmap Index Scan on ship_b_std_pos_messages_pkey \n(cost=0.00..13.41 rows=3 width=0) (actual time=49.125..49.125 rows=0 \nloops=1)\"\n\" Index Cond: (msg_id = ANY \n('{7294724,14174174,22254408}'::integer[]))\"\n\" -> Bitmap Heap Scan on ship_b_ext_pos_messages \nship_pos_messages (cost=12.80..24.62 rows=1 width=128) (actual \ntime=0.029..0.029 rows=0 loops=1)\"\n\" Recheck Cond: (msg_id = ANY \n('{7294724,14174174,22254408}'::integer[]))\"\n\" Filter: (date_part('day'::text, msg_date_rec) = 1::double \nprecision)\"\n\" -> Bitmap Index Scan on ship_b_ext_pos_messages_pkey \n(cost=0.00..12.80 rows=3 width=0) (actual time=0.027..0.027 rows=0 loops=1)\"\n\" Index Cond: (msg_id = ANY \n('{7294724,14174174,22254408}'::integer[]))\"\n\" -> Bitmap Heap Scan on ship_a_pos_messages_wk0 \nship_pos_messages (cost=24.08..36.12 rows=1 width=128) (actual \ntime=79.572..114.152 rows=3 loops=1)\"\n\" Recheck Cond: (msg_id = ANY \n('{7294724,14174174,22254408}'::integer[]))\"\n\" Filter: (date_part('day'::text, msg_date_rec) = 1::double \nprecision)\"\n\" -> Bitmap Index Scan on ship_a_pos_messages_wk0_pkey \n(cost=0.00..24.08 rows=3 width=0) (actual time=67.441..67.441 rows=3 \nloops=1)\"\n\" Index Cond: (msg_id = ANY \n('{7294724,14174174,22254408}'::integer[]))\"\n\"Total runtime: 180.146 ms\"\n\nI think this is a pretty good plan and quite quick given the size of the \ntable (88Million rows at present). However in real life the parameter \nwhere I search for msg_id is not an array of 3 ids but of 300.000 or \nmore. It is then that the query forgets the plan and goes to sequential \nscan. Is there any way around? 
Or is this the best I can have?\n\nKind Regards\nYiannis\n",
"msg_date": "Mon, 06 Aug 2012 16:08:05 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sequential scan instead of index scan"
},
{
"msg_contents": "Hi Yiannis,\n\nIs there anything linking these ids together, or are the relatively \nrandom? If they are relatively random, the rows are likely to be \nsprinkled amongst many blocks and so a seq scan is the fastest. I've \nseen similar problems with indexed queries in a multi-tennant database \nwhere the data is so fragmented that once the record volume hits a \ncertain threshold, Postgres decides to table scan rather than use an index.\n\nThe query optimiser is unlikely to be able to determine the disk \nlocality of 300k rows and so it just takes a punt on a seq scan.\n\nIf you added another filter condition on something indexed e.g. last \nweek or last month or location or something, you might do better if the \ndata does exhibit disk locality. If the data really is scattered, then \na seq scan really will be quicker.\n\nRegards, David\n\nOn 06/08/12 23:08, Ioannis Anagnostopoulos wrote:\n> Hi, my query is very simple:\n>\n> select\n> msg_id,\n> msg_type,\n> ship_pos_messages.pos_georef1,\n> ship_pos_messages.pos_georef2,\n> ship_pos_messages.pos_georef3,\n> ship_pos_messages.pos_georef4,\n> obj_id,\n> ship_speed,\n> ship_heading,\n> ship_course,\n> pos_point\n> from\n> feed_all_y2012m08.ship_pos_messages\n> where\n> extract('day' from msg_date_rec) = 1\n> AND msg_id = any(ARRAY[7294724,14174174,22254408]);\n>\n> The msg_id is the pkey on the ship_pos_messages table and in this \n> example it is working fast as it uses the pkey (primary key index) to \n> make the selection. The expplain anayze follows:\n> \"Result (cost=0.00..86.16 rows=5 width=117) (actual \n> time=128.734..163.319 rows=3 loops=1)\"\n> \" -> Append (cost=0.00..86.16 rows=5 width=117) (actual \n> time=128.732..163.315 rows=3 loops=1)\"\n> \" -> Seq Scan on ship_pos_messages (cost=0.00..0.00 rows=1 \n> width=100) (actual time=0.001..0.001 rows=0 loops=1)\"\n> \" Filter: ((msg_id = ANY \n> ('{7294724,14174174,22254408}'::integer[])) AND \n> (date_part('day'::text, msg_date_rec) = 1::double precision))\"\n> \" -> Seq Scan on ship_a_pos_messages ship_pos_messages \n> (cost=0.00..0.00 rows=1 width=100) (actual time=0.000..0.000 rows=0 \n> loops=1)\"\n> \" Filter: ((msg_id = ANY \n> ('{7294724,14174174,22254408}'::integer[])) AND \n> (date_part('day'::text, msg_date_rec) = 1::double precision))\"\n> \" -> Bitmap Heap Scan on ship_b_std_pos_messages \n> ship_pos_messages (cost=13.41..25.42 rows=1 width=128) (actual \n> time=49.127..49.127 rows=0 loops=1)\"\n> \" Recheck Cond: (msg_id = ANY \n> ('{7294724,14174174,22254408}'::integer[]))\"\n> \" Filter: (date_part('day'::text, msg_date_rec) = \n> 1::double precision)\"\n> \" -> Bitmap Index Scan on ship_b_std_pos_messages_pkey \n> (cost=0.00..13.41 rows=3 width=0) (actual time=49.125..49.125 rows=0 \n> loops=1)\"\n> \" Index Cond: (msg_id = ANY \n> ('{7294724,14174174,22254408}'::integer[]))\"\n> \" -> Bitmap Heap Scan on ship_b_ext_pos_messages \n> ship_pos_messages (cost=12.80..24.62 rows=1 width=128) (actual \n> time=0.029..0.029 rows=0 loops=1)\"\n> \" Recheck Cond: (msg_id = ANY \n> ('{7294724,14174174,22254408}'::integer[]))\"\n> \" Filter: (date_part('day'::text, msg_date_rec) = \n> 1::double precision)\"\n> \" -> Bitmap Index Scan on ship_b_ext_pos_messages_pkey \n> (cost=0.00..12.80 rows=3 width=0) (actual time=0.027..0.027 rows=0 \n> loops=1)\"\n> \" Index Cond: (msg_id = ANY \n> ('{7294724,14174174,22254408}'::integer[]))\"\n> \" -> Bitmap Heap Scan on ship_a_pos_messages_wk0 \n> ship_pos_messages (cost=24.08..36.12 rows=1 width=128) (actual \n> 
time=79.572..114.152 rows=3 loops=1)\"\n> \" Recheck Cond: (msg_id = ANY \n> ('{7294724,14174174,22254408}'::integer[]))\"\n> \" Filter: (date_part('day'::text, msg_date_rec) = \n> 1::double precision)\"\n> \" -> Bitmap Index Scan on ship_a_pos_messages_wk0_pkey \n> (cost=0.00..24.08 rows=3 width=0) (actual time=67.441..67.441 rows=3 \n> loops=1)\"\n> \" Index Cond: (msg_id = ANY \n> ('{7294724,14174174,22254408}'::integer[]))\"\n> \"Total runtime: 180.146 ms\"\n>\n> I think this is a pretty good plan and quite quick given the size of \n> the table (88Million rows at present). However in real life the \n> parameter where I search for msg_id is not an array of 3 ids but of \n> 300.000 or more. It is then that the query forgets the plan and goes \n> to sequential scan. Is there any way around? Or is this the best I \n> can have?\n>\n> Kind Regards\n> Yiannis\n",
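One rough way to check the disk locality David describes is the planner's own correlation statistic (a hedged illustration; any of the partitions named in the plan could be substituted for the table name):

    SELECT tablename, attname, correlation
    FROM pg_stats
    WHERE tablename = 'ship_a_pos_messages_wk0'
      AND attname IN ('msg_id', 'msg_date_rec');

Values near +1 or -1 mean the column is laid out roughly in order on disk; values near 0 suggest the scattered case David is talking about.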
"msg_date": "Mon, 06 Aug 2012 23:16:29 +0800",
"msg_from": "David Barton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequential scan instead of index scan"
},
{
"msg_contents": "They are random as the data are coming from multiple threads that are \ninserting in the database. I see what you say about \"linking them\", and \nI may give it a try with the date. The other think that \"links\" them \ntogether is the 4 georef fields, however at that stage I am trying to \ncollect statistics on the georefs population of \"msg_id\" so I don't know \nbefore hand the values to limit my query on them... Do you think an \nindex on \"date, msg_id\" might do something?\n\nYiannis\n\nOn 06/08/2012 16:16, David Barton wrote:\n> Hi Yiannis,\n>\n> Is there anything linking these ids together, or are the relatively \n> random? If they are relatively random, the rows are likely to be \n> sprinkled amongst many blocks and so a seq scan is the fastest. I've \n> seen similar problems with indexed queries in a multi-tennant database \n> where the data is so fragmented that once the record volume hits a \n> certain threshold, Postgres decides to table scan rather than use an \n> index.\n>\n> The query optimiser is unlikely to be able to determine the disk \n> locality of 300k rows and so it just takes a punt on a seq scan.\n>\n> If you added another filter condition on something indexed e.g. last \n> week or last month or location or something, you might do better if \n> the data does exhibit disk locality. If the data really is scattered, \n> then a seq scan really will be quicker.\n>\n> Regards, David\n>\n> On 06/08/12 23:08, Ioannis Anagnostopoulos wrote:\n>> Hi, my query is very simple:\n>>\n>> select\n>> msg_id,\n>> msg_type,\n>> ship_pos_messages.pos_georef1,\n>> ship_pos_messages.pos_georef2,\n>> ship_pos_messages.pos_georef3,\n>> ship_pos_messages.pos_georef4,\n>> obj_id,\n>> ship_speed,\n>> ship_heading,\n>> ship_course,\n>> pos_point\n>> from\n>> feed_all_y2012m08.ship_pos_messages\n>> where\n>> extract('day' from msg_date_rec) = 1\n>> AND msg_id = any(ARRAY[7294724,14174174,22254408]);\n>>\n>> The msg_id is the pkey on the ship_pos_messages table and in this \n>> example it is working fast as it uses the pkey (primary key index) to \n>> make the selection. 
The expplain anayze follows:\n>> \"Result (cost=0.00..86.16 rows=5 width=117) (actual \n>> time=128.734..163.319 rows=3 loops=1)\"\n>> \" -> Append (cost=0.00..86.16 rows=5 width=117) (actual \n>> time=128.732..163.315 rows=3 loops=1)\"\n>> \" -> Seq Scan on ship_pos_messages (cost=0.00..0.00 rows=1 \n>> width=100) (actual time=0.001..0.001 rows=0 loops=1)\"\n>> \" Filter: ((msg_id = ANY \n>> ('{7294724,14174174,22254408}'::integer[])) AND \n>> (date_part('day'::text, msg_date_rec) = 1::double precision))\"\n>> \" -> Seq Scan on ship_a_pos_messages ship_pos_messages \n>> (cost=0.00..0.00 rows=1 width=100) (actual time=0.000..0.000 rows=0 \n>> loops=1)\"\n>> \" Filter: ((msg_id = ANY \n>> ('{7294724,14174174,22254408}'::integer[])) AND \n>> (date_part('day'::text, msg_date_rec) = 1::double precision))\"\n>> \" -> Bitmap Heap Scan on ship_b_std_pos_messages \n>> ship_pos_messages (cost=13.41..25.42 rows=1 width=128) (actual \n>> time=49.127..49.127 rows=0 loops=1)\"\n>> \" Recheck Cond: (msg_id = ANY \n>> ('{7294724,14174174,22254408}'::integer[]))\"\n>> \" Filter: (date_part('day'::text, msg_date_rec) = \n>> 1::double precision)\"\n>> \" -> Bitmap Index Scan on ship_b_std_pos_messages_pkey \n>> (cost=0.00..13.41 rows=3 width=0) (actual time=49.125..49.125 rows=0 \n>> loops=1)\"\n>> \" Index Cond: (msg_id = ANY \n>> ('{7294724,14174174,22254408}'::integer[]))\"\n>> \" -> Bitmap Heap Scan on ship_b_ext_pos_messages \n>> ship_pos_messages (cost=12.80..24.62 rows=1 width=128) (actual \n>> time=0.029..0.029 rows=0 loops=1)\"\n>> \" Recheck Cond: (msg_id = ANY \n>> ('{7294724,14174174,22254408}'::integer[]))\"\n>> \" Filter: (date_part('day'::text, msg_date_rec) = \n>> 1::double precision)\"\n>> \" -> Bitmap Index Scan on ship_b_ext_pos_messages_pkey \n>> (cost=0.00..12.80 rows=3 width=0) (actual time=0.027..0.027 rows=0 \n>> loops=1)\"\n>> \" Index Cond: (msg_id = ANY \n>> ('{7294724,14174174,22254408}'::integer[]))\"\n>> \" -> Bitmap Heap Scan on ship_a_pos_messages_wk0 \n>> ship_pos_messages (cost=24.08..36.12 rows=1 width=128) (actual \n>> time=79.572..114.152 rows=3 loops=1)\"\n>> \" Recheck Cond: (msg_id = ANY \n>> ('{7294724,14174174,22254408}'::integer[]))\"\n>> \" Filter: (date_part('day'::text, msg_date_rec) = \n>> 1::double precision)\"\n>> \" -> Bitmap Index Scan on ship_a_pos_messages_wk0_pkey \n>> (cost=0.00..24.08 rows=3 width=0) (actual time=67.441..67.441 rows=3 \n>> loops=1)\"\n>> \" Index Cond: (msg_id = ANY \n>> ('{7294724,14174174,22254408}'::integer[]))\"\n>> \"Total runtime: 180.146 ms\"\n>>\n>> I think this is a pretty good plan and quite quick given the size of \n>> the table (88Million rows at present). However in real life the \n>> parameter where I search for msg_id is not an array of 3 ids but of \n>> 300.000 or more. It is then that the query forgets the plan and goes \n>> to sequential scan. Is there any way around? Or is this the best I \n>> can have?\n>>\n>> Kind Regards\n>> Yiannis\n>\n\n\n\n\n\n\n\nThey are random as the data are coming\n from multiple threads that are inserting in the database. I see\n what you say about \"linking them\", and I may give it a try with\n the date. The other think that \"links\" them together is the 4\n georef fields, however at that stage I am trying to collect\n statistics on the georefs population of \"msg_id\" so I don't know\n before hand the values to limit my query on them... 
Do you think\n an index on \"date, msg_id\" might do something?\n\n Yiannis\n\n On 06/08/2012 16:16, David Barton wrote:\n\n\n\n Hi Yiannis,\n\n Is there anything linking these ids together, or are the\n relatively random? If they are relatively random, the rows are\n likely to be sprinkled amongst many blocks and so a seq scan is\n the fastest. I've seen similar problems with indexed queries in a\n multi-tennant database where the data is so fragmented that once\n the record volume hits a certain threshold, Postgres decides to\n table scan rather than use an index.\n\n The query optimiser is unlikely to be able to determine the disk\n locality of 300k rows and so it just takes a punt on a seq scan.\n\n If you added another filter condition on something indexed e.g.\n last week or last month or location or something, you might do\n better if the data does exhibit disk locality. If the data really\n is scattered, then a seq scan really will be quicker.\n\n Regards, David\n\nOn 06/08/12 23:08, Ioannis\n Anagnostopoulos wrote:\n\n\n\n Hi, my query is very simple:\n\n select \n msg_id,\n msg_type,\n ship_pos_messages.pos_georef1,\n ship_pos_messages.pos_georef2,\n ship_pos_messages.pos_georef3,\n ship_pos_messages.pos_georef4,\n obj_id,\n ship_speed,\n ship_heading,\n ship_course,\n pos_point\n from \n feed_all_y2012m08.ship_pos_messages \n where \n extract('day' from msg_date_rec) = 1\n AND msg_id = any(ARRAY[7294724,14174174,22254408]);\n\nThe msg_id is the pkey on the ship_pos_messages table and\n in this example it is working fast as it uses the pkey\n (primary key index) to make the selection. The expplain anayze\n follows:\n \"Result (cost=0.00..86.16 rows=5 width=117) (actual\n time=128.734..163.319 rows=3 loops=1)\"\n \" -> Append (cost=0.00..86.16 rows=5 width=117) (actual\n time=128.732..163.315 rows=3 loops=1)\"\n \" -> Seq Scan on ship_pos_messages \n (cost=0.00..0.00 rows=1 width=100) (actual time=0.001..0.001\n rows=0 loops=1)\"\n \" Filter: ((msg_id = ANY\n ('{7294724,14174174,22254408}'::integer[])) AND\n (date_part('day'::text, msg_date_rec) = 1::double precision))\"\n \" -> Seq Scan on ship_a_pos_messages\n ship_pos_messages (cost=0.00..0.00 rows=1 width=100) (actual\n time=0.000..0.000 rows=0 loops=1)\"\n \" Filter: ((msg_id = ANY\n ('{7294724,14174174,22254408}'::integer[])) AND\n (date_part('day'::text, msg_date_rec) = 1::double precision))\"\n \" -> Bitmap Heap Scan on ship_b_std_pos_messages\n ship_pos_messages (cost=13.41..25.42 rows=1 width=128)\n (actual time=49.127..49.127 rows=0 loops=1)\"\n \" Recheck Cond: (msg_id = ANY\n ('{7294724,14174174,22254408}'::integer[]))\"\n \" Filter: (date_part('day'::text, msg_date_rec) =\n 1::double precision)\"\n \" -> Bitmap Index Scan on\n ship_b_std_pos_messages_pkey (cost=0.00..13.41 rows=3\n width=0) (actual time=49.125..49.125 rows=0 loops=1)\"\n \" Index Cond: (msg_id = ANY\n ('{7294724,14174174,22254408}'::integer[]))\"\n \" -> Bitmap Heap Scan on ship_b_ext_pos_messages\n ship_pos_messages (cost=12.80..24.62 rows=1 width=128)\n (actual time=0.029..0.029 rows=0 loops=1)\"\n \" Recheck Cond: (msg_id = ANY\n ('{7294724,14174174,22254408}'::integer[]))\"\n \" Filter: (date_part('day'::text, msg_date_rec) =\n 1::double precision)\"\n \" -> Bitmap Index Scan on\n ship_b_ext_pos_messages_pkey (cost=0.00..12.80 rows=3\n width=0) (actual time=0.027..0.027 rows=0 loops=1)\"\n \" Index Cond: (msg_id = ANY\n ('{7294724,14174174,22254408}'::integer[]))\"\n \" -> Bitmap Heap Scan on ship_a_pos_messages_wk0\n ship_pos_messages 
(cost=24.08..36.12 rows=1 width=128)\n (actual time=79.572..114.152 rows=3 loops=1)\"\n \" Recheck Cond: (msg_id = ANY\n ('{7294724,14174174,22254408}'::integer[]))\"\n \" Filter: (date_part('day'::text, msg_date_rec) =\n 1::double precision)\"\n \" -> Bitmap Index Scan on\n ship_a_pos_messages_wk0_pkey (cost=0.00..24.08 rows=3\n width=0) (actual time=67.441..67.441 rows=3 loops=1)\"\n \" Index Cond: (msg_id = ANY\n ('{7294724,14174174,22254408}'::integer[]))\"\n \"Total runtime: 180.146 ms\"\n\n I think this is a pretty good plan and quite quick given the\n size of the table (88Million rows at present). However in real\n life the parameter where I search for msg_id is not an array of\n 3 ids but of 300.000 or more. It is then that the query forgets\n the plan and goes to sequential scan. Is there any way around?\n Or is this the best I can have?\n\n Kind Regards\n Yiannis",
"msg_date": "Mon, 06 Aug 2012 16:24:57 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sequential scan instead of index scan"
},
{
"msg_contents": "Ioannis Anagnostopoulos <[email protected]> writes:\n> I think this is a pretty good plan and quite quick given the\n> size of the table (88Million rows at present). However in real\n> life the parameter where I search for msg_id is not an array of\n> 3 ids but of 300.000 or more. It is then that the query forgets\n> the plan and goes to sequential scan. Is there any way around?\n\nIf you've got that many, any(array[....]) is a bad choice. I'd try\nputting the IDs into a VALUES(...) list, or even a temporary table, and\nthen writing the query as a join. It is a serious mistake to think that\na seqscan is evil when you're dealing with joining that many rows, btw.\nWhat you should probably be looking for is a hash join plan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2012 11:34:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequential scan instead of index scan"
},
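A minimal sketch of the temp-table variant Tom Lane suggests above, assuming the large msg_id list is loaded into a temporary table first; the table and column names are taken from this thread and the exact schema is an assumption:

    CREATE TEMP TABLE tmp_tbl_messages (msg_id bigint PRIMARY KEY);
    -- load the ~300,000 ids here, e.g. with COPY or INSERT ... SELECT
    ANALYZE tmp_tbl_messages;   -- give the planner realistic row estimates

    SELECT m.*
    FROM feed_all_y2012m08.ship_pos_messages AS m
    JOIN tmp_tbl_messages AS t USING (msg_id)
    WHERE extract('day' from m.msg_date_rec) = 1;

With a join the planner is free to pick a hash or merge join over the whole id set instead of probing the partitions once per array element.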
{
"msg_contents": "On 06/08/2012 16:34, Tom Lane wrote:\n> Ioannis Anagnostopoulos <[email protected]> writes:\n>> I think this is a pretty good plan and quite quick given the\n>> size of the table (88Million rows at present). However in real\n>> life the parameter where I search for msg_id is not an array of\n>> 3 ids but of 300.000 or more. It is then that the query forgets\n>> the plan and goes to sequential scan. Is there any way around?\n> If you've got that many, any(array[....]) is a bad choice. I'd try\n> putting the IDs into a VALUES(...) list, or even a temporary table, and\n> then writing the query as a join. It is a serious mistake to think that\n> a seqscan is evil when you're dealing with joining that many rows, btw.\n> What you should probably be looking for is a hash join plan.\n>\n> \t\t\tregards, tom lane\nOk in that scenario we are back to square one. Following your suggestion \nmy resultant query is this (the temporary table is tmp_tbl_messages)\nselect\n ship_pos_messages.*\n from\n feed_all_y2012m08.ship_pos_messages join tmp_tbl_messages \non (ship_pos_messages.msg_id = tmp_tbl_messages.msg_id)\n where\n extract('day' from msg_date_rec) = 1\n AND date_trunc('day', msg_date_rec) = '2012-08-01';\n\nwhich gives us the following explain analyse:\n\n\"Merge Join (cost=1214220.48..3818359.46 rows=173574357 width=128) \n(actual time=465036.958..479089.731 rows=341190 loops=1)\"\n\" Merge Cond: (feed_all_y2012m08.ship_pos_messages.msg_id = \ntmp_tbl_messages.msg_id)\"\n\" -> Sort (cost=1178961.70..1179223.51 rows=104725 width=128) (actual \ntime=464796.971..476579.208 rows=19512873 loops=1)\"\n\" Sort Key: feed_all_y2012m08.ship_pos_messages.msg_id\"\n\" Sort Method: external merge Disk: 1254048kB\"\n\" -> Append (cost=0.00..1170229.60 rows=104725 width=128) \n(actual time=0.033..438682.971 rows=19512883 loops=1)\"\n\" -> Seq Scan on ship_pos_messages (cost=0.00..0.00 rows=1 \nwidth=100) (actual time=0.000..0.000 rows=0 loops=1)\"\n\" Filter: ((date_part('day'::text, msg_date_rec) = \n1::double precision) AND (date_trunc('day'::text, msg_date_rec) = \n'2012-08-01 00:00:00'::timestamp without time zone))\"\n\" -> Seq Scan on ship_a_pos_messages ship_pos_messages \n(cost=0.00..0.00 rows=1 width=100) (actual time=0.000..0.000 rows=0 \nloops=1)\"\n\" Filter: ((date_part('day'::text, msg_date_rec) = \n1::double precision) AND (date_trunc('day'::text, msg_date_rec) = \n'2012-08-01 00:00:00'::timestamp without time zone))\"\n\" -> Index Scan using \nidx_ship_b_std_pos_messages_date_trunc on ship_b_std_pos_messages \nship_pos_messages (cost=0.00..58657.09 rows=5269 width=128) (actual \ntime=0.032..799.171 rows=986344 loops=1)\"\n\" Index Cond: (date_trunc('day'::text, msg_date_rec) \n= '2012-08-01 00:00:00'::timestamp without time zone)\"\n\" Filter: (date_part('day'::text, msg_date_rec) = \n1::double precision)\"\n\" -> Index Scan using \nidx_ship_b_ext_pos_messages_date_trunc on ship_b_ext_pos_messages \nship_pos_messages (cost=0.00..1694.64 rows=141 width=128) (actual \ntime=0.026..20.661 rows=26979 loops=1)\"\n\" Index Cond: (date_trunc('day'::text, msg_date_rec) \n= '2012-08-01 00:00:00'::timestamp without time zone)\"\n\" Filter: (date_part('day'::text, msg_date_rec) = \n1::double precision)\"\n\" -> Index Scan using \nidx_ship_a_pos_messages_wk0_date_trunc on ship_a_pos_messages_wk0 \nship_pos_messages (cost=0.00..1109877.86 rows=99313 width=128) (actual \ntime=0.029..435784.376 rows=18499560 loops=1)\"\n\" Index Cond: (date_trunc('day'::text, msg_date_rec) \n= '2012-08-01 
00:00:00'::timestamp without time zone)\"\n\" Filter: (date_part('day'::text, msg_date_rec) = \n1::double precision)\"\n\" -> Sort (cost=35258.79..36087.50 rows=331486 width=8) (actual \ntime=239.908..307.576 rows=349984 loops=1)\"\n\" Sort Key: tmp_tbl_messages.msg_id\"\n\" Sort Method: quicksort Memory: 28694kB\"\n\" -> Seq Scan on tmp_tbl_messages (cost=0.00..4863.86 \nrows=331486 width=8) (actual time=0.047..55.227 rows=349984 loops=1)\"\n\"Total runtime: 479336.869 ms\"\n\n\nWhich is a Merge join and not a hash. Any ideas how to make it a hash join?\n\nKind Regards\nYiannis",
"msg_date": "Mon, 06 Aug 2012 23:04:10 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sequential scan instead of index scan"
},
{
"msg_contents": "Ioannis Anagnostopoulos <[email protected]> writes:\n> On 06/08/2012 16:34, Tom Lane wrote:\n>> What you should probably be looking for is a hash join plan.\n\n> ...\n> Which is a Merge join and not a hash. Any ideas how to make it a hash join?\n\nYou might need to ANALYZE the temp table, if you didn't already. Also\nit might be that you need to increase work_mem enough to fit the temp\ntable into memory.\n\nAnother thing that's bothering me is that the rowcount estimates are so\nfar off, particularly this one:\n\n> \" -> Index Scan using \n> idx_ship_a_pos_messages_wk0_date_trunc on ship_a_pos_messages_wk0 \n> ship_pos_messages (cost=0.00..1109877.86 rows=99313 width=128) (actual \n> time=0.029..435784.376 rows=18499560 loops=1)\"\n> \" Index Cond: (date_trunc('day'::text, msg_date_rec) \n> = '2012-08-01 00:00:00'::timestamp without time zone)\"\n> \" Filter: (date_part('day'::text, msg_date_rec) = \n> 1::double precision)\"\n\nOffhand I'd have thought that ANALYZE would gather stats on the\ndate_trunc expression (because it is indexed) and then you should get\nsomething reasonably accurate for a comparison to a constant.\n\"Reasonably accurate\" meaning \"not off by two orders of magnitude\".\nPractically all of your runtime is going into this one indexscan,\nand TBH it seems likely you'd be better off with a seqscan there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Aug 2012 23:13:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequential scan instead of index scan"
},
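Both knobs mentioned above can be tried per session; the work_mem figure below is only an illustrative assumption, sized so the hashed side (the id list) fits in memory:

    ANALYZE tmp_tbl_messages;   -- without this the planner only guesses the temp table's size
    SET work_mem = '256MB';     -- session-local; pick a value large enough to hold the id list
    EXPLAIN ANALYZE
    SELECT m.*
    FROM feed_all_y2012m08.ship_pos_messages AS m
    JOIN tmp_tbl_messages AS t USING (msg_id)
    WHERE date_trunc('day', m.msg_date_rec) = '2012-08-01';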
{
"msg_contents": "> Offhand I'd have thought that ANALYZE would gather stats on the\n> date_trunc expression (because it is indexed) and then you should get\n> something reasonably accurate for a comparison to a constant.\n> \"Reasonably accurate\" meaning \"not off by two orders of magnitude\".\n> Practically all of your runtime is going into this one indexscan,\n> and TBH it seems likely you'd be better off with a seqscan there.\n>\n> \t\t\tregards, tom lane\nYou were right, after running ANALYZE on the temp table I eventually got \nthe HASH JOIN we were talking about. Here is the plan:\n\n\"Hash Join (cost=379575.54..1507341.18 rows=95142 width=128) (actual \ntime=3128.940..634179.270 rows=10495795 loops=1)\"\n\" Hash Cond: (feed_all_y2012m08.ship_pos_messages.msg_id = \ntmp_tbl_messages.msg_id)\"\n\" -> Append (cost=0.00..1073525.24 rows=95142 width=128) (actual \ntime=37.157..599002.314 rows=18891614 loops=1)\"\n\" -> Seq Scan on ship_pos_messages (cost=0.00..0.00 rows=1 \nwidth=100) (actual time=0.001..0.001 rows=0 loops=1)\"\n\" Filter: ((date_part('day'::text, msg_date_rec) = \n2::double precision) AND (date_trunc('day'::text, msg_date_rec) = \n'2012-08-02 00:00:00'::timestamp without time zone))\"\n\" -> Seq Scan on ship_a_pos_messages ship_pos_messages \n(cost=0.00..0.00 rows=1 width=100) (actual time=0.000..0.000 rows=0 \nloops=1)\"\n\" Filter: ((date_part('day'::text, msg_date_rec) = \n2::double precision) AND (date_trunc('day'::text, msg_date_rec) = \n'2012-08-02 00:00:00'::timestamp without time zone))\"\n\" -> Index Scan using idx_ship_b_std_pos_messages_date_trunc on \nship_b_std_pos_messages ship_pos_messages (cost=0.00..48111.95 \nrows=4323 width=128) (actual time=37.156..23782.030 rows=808692 loops=1)\"\n\" Index Cond: (date_trunc('day'::text, msg_date_rec) = \n'2012-08-02 00:00:00'::timestamp without time zone)\"\n\" Filter: (date_part('day'::text, msg_date_rec) = 2::double \nprecision)\"\n\" -> Index Scan using idx_ship_b_ext_pos_messages_date_trunc on \nship_b_ext_pos_messages ship_pos_messages (cost=0.00..1844.30 rows=154 \nwidth=128) (actual time=42.042..1270.104 rows=28656 loops=1)\"\n\" Index Cond: (date_trunc('day'::text, msg_date_rec) = \n'2012-08-02 00:00:00'::timestamp without time zone)\"\n\" Filter: (date_part('day'::text, msg_date_rec) = 2::double \nprecision)\"\n\" -> Index Scan using idx_ship_a_pos_messages_wk0_date_trunc on \nship_a_pos_messages_wk0 ship_pos_messages (cost=0.00..1023568.99 \nrows=90663 width=128) (actual time=51.181..571590.415 rows=18054266 \nloops=1)\"\n\" Index Cond: (date_trunc('day'::text, msg_date_rec) = \n'2012-08-02 00:00:00'::timestamp without time zone)\"\n\" Filter: (date_part('day'::text, msg_date_rec) = 2::double \nprecision)\"\n\" -> Hash (cost=177590.46..177590.46 rows=12311446 width=8) (actual \ntime=3082.762..3082.762 rows=12311446 loops=1)\"\n\" Buckets: 524288 Batches: 4 Memory Usage: 120316kB\"\n\" -> Seq Scan on tmp_tbl_messages (cost=0.00..177590.46 \nrows=12311446 width=8) (actual time=0.022..1181.376 rows=12311446 loops=1)\"\n\"Total runtime: 634764.596 ms\"\n\nThe time looks reasonable but still quite high for the over night job I \nam need it for (have to run around 30 of those). 
So since the join has\nbeen shorted I think I need to do something with the rows difference \nbetween actual and expected in the:\n\n\" -> Index Scan using idx_ship_a_pos_messages_wk0_date_trunc on \nship_a_pos_messages_wk0 ship_pos_messages (cost=0.00..1023568.99 \nrows=90663 width=128) (actual time=51.181..571590.415 rows=18054266 \nloops=1)\"\n\" Index Cond: (date_trunc('day'::text, msg_date_rec) = \n'2012-08-02 00:00:00'::timestamp without time zone)\"\n\" Filter: (date_part('day'::text, msg_date_rec) = 2::double \nprecision)\"\n\n From what I understand a possible solution is to increase the stats \ntarget for the particular column(?). Any suggestion there? I assume we \nare talking about the msg_date_rec where the index is build uppon.\nFinally, I do understand what you say about the Seq scan. However in \nthis case I have consistently about 10min per execution while the \nSeqScan was giving me almost nothing at best and usually it was running \nfor so long that\neventually was causing my server problems...\n\nKind Regards\nYiannis",
"msg_date": "Tue, 07 Aug 2012 13:15:06 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sequential scan instead of index scan"
},
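One way to experiment with the statistics target Yiannis asks about: ANALYZE also gathers statistics for the expressions of expression indexes such as idx_ship_a_pos_messages_wk0_date_trunc, and a manual ANALYZE honours a session-level default_statistics_target. The target value and the schema qualification below are assumptions:

    SET default_statistics_target = 500;   -- only affects ANALYZE runs in this session
    ANALYZE feed_all_y2012m08.ship_a_pos_messages_wk0;
    -- then re-run the EXPLAIN ANALYZE above and compare estimated vs. actual rows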
{
"msg_contents": "On Mon, Aug 6, 2012 at 8:08 AM, Ioannis Anagnostopoulos\n<[email protected]> wrote:\n> Hi, my query is very simple:\n>\n> select\n> msg_id,\n> msg_type,\n> ship_pos_messages.pos_georef1,\n> ship_pos_messages.pos_georef2,\n> ship_pos_messages.pos_georef3,\n> ship_pos_messages.pos_georef4,\n> obj_id,\n> ship_speed,\n> ship_heading,\n> ship_course,\n> pos_point\n> from\n> feed_all_y2012m08.ship_pos_messages\n> where\n> extract('day' from msg_date_rec) = 1\n> AND msg_id = any(ARRAY[7294724,14174174,22254408]);\n>\n> The msg_id is the pkey on the ship_pos_messages table and in this example it\n> is working fast as it uses the pkey (primary key index) to make the\n> selection. The expplain anayze follows:\n...\n>\n> I think this is a pretty good plan and quite quick given the size of the\n> table (88Million rows at present). However in real life the parameter where\n> I search for msg_id is not an array of 3 ids but of 300.000 or more. It is\n> then that the query forgets the plan and goes to sequential scan. Is there\n> any way around? Or is this the best I can have?\n\nWhat happens if you set \"enable_seqscan=off\" and run the query with\nthe very large list? (This is an experiment, not a recommendation for\nproduction use)\n\n\nCheers,\n\nJeff\n",
"msg_date": "Tue, 7 Aug 2012 09:00:55 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequential scan instead of index scan"
},
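Jeff's experiment can be scoped to a single transaction so the setting never leaks into other work; a sketch using the query from the start of the thread (column list shortened):

    BEGIN;
    SET LOCAL enable_seqscan = off;   -- planner experiment only, reverted at ROLLBACK
    EXPLAIN ANALYZE
    SELECT msg_id, msg_type, obj_id, pos_point
    FROM feed_all_y2012m08.ship_pos_messages
    WHERE extract('day' from msg_date_rec) = 1
      AND msg_id = ANY (ARRAY[7294724,14174174,22254408]);
    ROLLBACK;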
{
"msg_contents": "On 07/08/2012 17:00, Jeff Janes wrote:\n> On Mon, Aug 6, 2012 at 8:08 AM, Ioannis Anagnostopoulos\n> <[email protected]> wrote:\n>> Hi, my query is very simple:\n>>\n>> select\n>> msg_id,\n>> msg_type,\n>> ship_pos_messages.pos_georef1,\n>> ship_pos_messages.pos_georef2,\n>> ship_pos_messages.pos_georef3,\n>> ship_pos_messages.pos_georef4,\n>> obj_id,\n>> ship_speed,\n>> ship_heading,\n>> ship_course,\n>> pos_point\n>> from\n>> feed_all_y2012m08.ship_pos_messages\n>> where\n>> extract('day' from msg_date_rec) = 1\n>> AND msg_id = any(ARRAY[7294724,14174174,22254408]);\n>>\n>> The msg_id is the pkey on the ship_pos_messages table and in this example it\n>> is working fast as it uses the pkey (primary key index) to make the\n>> selection. The expplain anayze follows:\n> ...\n>> I think this is a pretty good plan and quite quick given the size of the\n>> table (88Million rows at present). However in real life the parameter where\n>> I search for msg_id is not an array of 3 ids but of 300.000 or more. It is\n>> then that the query forgets the plan and goes to sequential scan. Is there\n>> any way around? Or is this the best I can have?\n> What happens if you set \"enable_seqscan=off\" and run the query with\n> the very large list? (This is an experiment, not a recommendation for\n> production use)\n>\n>\n> Cheers,\n>\n> Jeff\nAs Tom said, the actual question is not valid. Seq scan are not bad, we \njust need to understand the way around it instead of forcing them off. \nIn my case, the problem was the ARRAY as a parameter (which all together \nis not that great for holding so many data). By converting it into a \ntemporary table and performing an inner join in the query (after \nanalysing the temp table) you get a nice Hash join (or Merge Join if you \ndon't analyse the temp table).\n\ncheers Yiannis\n",
"msg_date": "Tue, 07 Aug 2012 17:06:54 +0100",
"msg_from": "Ioannis Anagnostopoulos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sequential scan instead of index scan"
},
{
"msg_contents": "On Tue, Aug 7, 2012 at 9:06 AM, Ioannis Anagnostopoulos\n<[email protected]> wrote:\n> On 07/08/2012 17:00, Jeff Janes wrote:\n>>\n>> What happens if you set \"enable_seqscan=off\" and run the query with\n>> the very large list? (This is an experiment, not a recommendation for\n>> production use)\n>>\n>>\n>> Cheers,\n>>\n>> Jeff\n>\n> As Tom said, the actual question is not valid. Seq scan are not bad,\n\nRight, that is why I proposed it as an experiment, not for production use.\n\n> we just\n> need to understand the way around it instead of forcing them off.\n\nI think the first step to understanding the way around it is to force\nit off, and see what the planner thinks it's next best option is, and\nwhy it thinks that.\n\n\n> In my\n> case, the problem was the ARRAY as a parameter (which all together is not\n> that great for holding so many data).\n\nI think the only thing that is great for holding that much data is a\nquery against live permanent tables which returns it. Given the\nchoice between stuffing it in an ARRAY and stuffing it in a temp table\nand then manually analyzing it, neither one of those seems\nfundamentally better than the other at the scale of 300,000.\n\n\n> By converting it into a temporary\n> table and performing an inner join in the query (after analysing the temp\n> table) you get a nice Hash join (or Merge Join if you don't analyse the temp\n> table).\n\nI don't see those as being very good. The \"primary key\" part of the\nquery is far more selective than the date part, so what you are doing\nis fetching a huge number of rows only to throw out the vast majority\nof them.\n\nI think the optimal plan would be a bitmap scan on the indexes of the\n\"primary key\" column. This should automatically take advantage of the\nsequential read nature of the table data to the extent the results are\nwell clustered, and if they aren't clustered it should benefit from\neffective_io_concurrency if that is set appropriately.\n\nCheers,\n\nJeff\n",
"msg_date": "Tue, 7 Aug 2012 10:42:08 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequential scan instead of index scan"
}
] |
[
{
"msg_contents": "Diff of config files is below. default_statistics_target in both is currently at the default of 100, though I'm going to try increasing that for this table as Tom Lane suggested. \n-Midge\n\n----- Original Message ----- \nFrom: Greg Williamson \nTo: [email protected] \nSent: Friday, August 03, 2012 6:30 PM\nSubject: Re: [PERFORM] slow query, different plans\n\n\nMidge --\n\n\nSorry for top-quoting -- challenged mail.\n\n\nPerhaps a difference in the stats estimates -- default_statistics_target ?\n\n\nCan you show us a diff between the postgres config files for each instance ? Maybe something there ...\n\n\nGreg Williamson\n\n\n\n------------------------------------------------------------------------------\n From: Midge Brown <[email protected]>\n To: [email protected] \n Sent: Friday, August 3, 2012 5:38 PM\n Subject: [PERFORM] slow query, different plans\n\n\n\n I'm having a problem with a query on our production server, but not on a laptop running a similar postgres version with a recent backup copy of the same table. I tried reindexing the table on the production server, but it didn't make any difference. Other queries on the same table are plenty fast. \n\n This query has been slow, but never like this, particularly during a period when there are only a couple of connections in use. \n\n Vacuum and analyze are run nightly (and show as such in pg_stat_user_tables) in addition to autovacuum during the day. Here are my autovacuum settings, but when I checked last_autovacuum & last_autoanalyze in pg_stat_user_tables those fields were blank. \n\n autovacuum = on \n log_autovacuum_min_duration = 10 \n autovacuum_max_workers = 3 \n autovacuum_naptime = 1min \n autovacuum_vacuum_threshold = 50 \n autovacuum_analyze_threshold = 50 \n autovacuum_vacuum_scale_factor = 0.2 \n autovacuum_analyze_scale_factor = 0.1 \n autovacuum_freeze_max_age = 200000000 \n autovacuum_vacuum_cost_delay = 10ms (changed earlier today from 1000ms) \n autovacuum_vacuum_cost_limit = -1\n\n wal_level = minimal\n wal_buffers = 16MB\n\n The only recent change was moving the 3 databases we have from multiple raid 1 drives with tablespaces spread all over to one large raid10 with indexes and data in pg_default. WAL for this table was moved as well.\n\n Does anyone have any suggestions on where to look for the problem? 
\n\n clientlog table info:\n\n Size: 1.94G\n\n Column | Type | Modifiers \n ----------+-----------------------------+-----------\n pid0 | integer | not null\n rid | integer | not null\n verb | character varying(32) | not null\n noun | character varying(32) | not null\n detail | text | \n path | character varying(256) | not null\n ts | timestamp without time zone | \n applies2 | integer | \n toname | character varying(128) | \n byname | character varying(128) | \n Indexes:\n \"clientlog_applies2\" btree (applies2)\n \"clientlog_pid0_key\" btree (pid0)\n \"clientlog_rid_key\" btree (rid)\n \"clientlog_ts\" btree (ts)\n\n The query, hardware info, and links to both plans:\n\n explain analyze select max(ts) as ts from clientlog where applies2=256;\n\n Production server:\n - 4 dual-core AMD Opteron 2212 processors, 2010.485 MHz\n - 64GB RAM\n - 464GB RAID10 drive \n - Linux 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux\n PostgreSQL 9.0.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit\n\n http://explain.depesz.com/s/8R4\n\n From laptop running Linux 2.6.34.9-69.fc13.868 with 3G ram against a copy of the same table:\n PostgreSQL 9.0.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.4.4 20100630 (Red Hat 4.4.4-10), 32-bit\n\n http://explain.depesz.com/s/NQl\n\n Thank you,\n Midge\n\n\n ==================\n\n Here's the diff of the 2 config files. I didn't list the autovacuum settings since the laptop is a development machine with that feature turned off.\n\n 109c109\n < shared_buffers = 28MB # min 128kB\n ---\n > shared_buffers = 4GB\n 118,120c118,120\n < #work_mem = 1MB # min 64kB\n < #maintenance_work_mem = 16MB # min 1MB\n < #max_stack_depth = 2MB # min 100kB\n ---\n > work_mem = 16MB\n > maintenance_work_mem = 256MB\n > max_stack_depth = 2MB\n 130c130\n < #vacuum_cost_delay = 0ms # 0-100 milliseconds\n ---\n > vacuum_cost_delay = 10ms\n 134c134\n < #vacuum_cost_limit = 200 # 1-10000 credits\n ---\n > vacuum_cost_limit = 200 # 1-10000 credits\n 153c153\n < #wal_level = minimal # minimal, archive, or hot_standby\n ---\n > wal_level = minimal # minimal, archive, or hot_standby\n 165c165\n < wal_buffers = 64kB # min 32kB\n ---\n > wal_buffers = 16MB\n 174c174\n < checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n ---\n > checkpoint_segments = 64 # in logfile segments, min 1, 16MB each\n 176,177c176,177\n < checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0\n < checkpoint_warning = 30s # 0 disables\n ---\n > checkpoint_completion_target = 0.7 # checkpoint target duration, 0.0 - 1.0\n > checkpoint_warning = 30s # 0 disables\n 231c231\n < #effective_cache_size = 128MB\n ---\n > effective_cache_size = 10GB\n 413c414",
"msg_date": "Mon, 6 Aug 2012 10:43:13 -0700",
"msg_from": "\"Midge Brown\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow query, different plans"
}
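A hedged sketch of the per-column statistics change Midge mentions trying (the target of 1000 is only an example value; the table and column names are the ones shown above):

    ALTER TABLE clientlog ALTER COLUMN applies2 SET STATISTICS 1000;
    ANALYZE clientlog;
    EXPLAIN ANALYZE SELECT max(ts) AS ts FROM clientlog WHERE applies2 = 256;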
] |
[
{
"msg_contents": "Hi\n\nI have an interesting query to be optimized related to this one [1].\n\nThe query definition is: Select all buildings that have more than 1\npharmacies and more than 1 schools within a radius of 1000m.\n\nThe problem is that I think that this query is inherently O(n^2). In\nfact the solution I propose below takes forever...\n\nMy questions:\n\n1. Any comments about the nature of this problem?\n\n2. ... on how to speed it up ?\n\n3. In the original query [1] there's a count which contains a\nsubquery. According to my tests PostgreSQL does not allow this despite\nthe documentation which says \"count(expression)\".\n\nRemarks: I know that \"count(*)\" could be faster on PostgreSQL but\n\"count(osm_id)\" does not change the query plan and this does not seem\nto be the bottleneck here anyway.\n\nYours, S.\n\n[1] http://gis.stackexchange.com/questions/11445/selecting-pois-around-specific-buildings-using-postgis\n\n\nHere's my query:\n\n-- Select all buildings that have >1 pharmacies and >1 schools within 1000m:\nSELECT osm_id AS building_id\nFROM\n (SELECT osm_id, way\n FROM osm_polygon\n WHERE tags @> hstore('building','yes')\n ) AS b\nWHERE\n (SELECT count(*) > 1\n FROM osm_poi AS p\n WHERE p.tags @> hstore('amenity','pharmacy')\n AND ST_DWithin(b.way,p.way,1000)\n )\n AND\n (SELECT count(*) > 1\n FROM osm_poi AS p\n WHERE p.tags @> hstore('amenity','school')\n AND ST_DWithin(b.way,p.way,1000)\n )\n-- Total query runtime: 4308488 ms. 66345 rows retrieved.\n\nHere's the query plan (from EXPLAIN):\n\"Index Scan using osm_polygon_tags_idx on osm_polygon\n(cost=0.00..406812.81 rows=188 width=901)\"\n\" Index Cond: (tags @> '\"building\"=>\"yes\"'::hstore)\"\n\" Filter: ((SubPlan 1) AND (SubPlan 2))\"\n\" SubPlan 1\"\n\" -> Aggregate (cost=269.19..269.20 rows=1 width=0)\"\n\" -> Bitmap Heap Scan on osm_poi p (cost=7.76..269.19\nrows=1 width=0)\"\n\" Recheck Cond: (way && st_expand(osm_polygon.way,\n1000::double precision))\"\n\" Filter: ((tags @> '\"amenity\"=>\"pharmacy\"'::hstore)\nAND (osm_polygon.way && st_expand(way, 1000::double precision)) AND\n_st_dwithin(osm_polygon.way, way, 1000::double precision))\"\n\" -> Bitmap Index Scan on osm_poi_way_idx\n(cost=0.00..7.76 rows=62 width=0)\"\n\" Index Cond: (way && st_expand(osm_polygon.way,\n1000::double precision))\"\n\" SubPlan 2\"\n\" -> Aggregate (cost=269.19..269.20 rows=1 width=0)\"\n\" -> Bitmap Heap Scan on osm_poi p (cost=7.76..269.19\nrows=1 width=0)\"\n\" Recheck Cond: (way && st_expand(osm_polygon.way,\n1000::double precision))\"\n\" Filter: ((tags @> '\"amenity\"=>\"school\"'::hstore) AND\n(osm_polygon.way && st_expand(way, 1000::double precision)) AND\n_st_dwithin(osm_polygon.way, way, 1000::double precision))\"\n\" -> Bitmap Index Scan on osm_poi_way_idx\n(cost=0.00..7.76 rows=62 width=0)\"\n\" Index Cond: (way && st_expand(osm_polygon.way,\n1000::double precision))\"\n\n***\n",
"msg_date": "Tue, 7 Aug 2012 14:01:30 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query: Select all buildings that have >1 pharmacies and >1\n\tschools within 1000m"
},
{
"msg_contents": "On 7 Srpen 2012, 14:01, Stefan Keller wrote:\n> Hi\n>\n> I have an interesting query to be optimized related to this one [1].\n>\n> The query definition is: Select all buildings that have more than 1\n> pharmacies and more than 1 schools within a radius of 1000m.\n>\n> The problem is that I think that this query is inherently O(n^2). In\n> fact the solution I propose below takes forever...\n\nWhat about plain INTERSECT? Something like\n\nSELECT osm_id FROM osm_poi AS p, osm_polygon b\n WHERE p.tags @> hstore('amenity','pharmacy')\n AND ST_DWithin(b.way,p.way,1000)\nINTERSECT\nSELECT osm_id FROM osm_poi AS p, osm_polygon b\n WHERE p.tags @> hstore('amenity','school')\n AND ST_DWithin(b.way,p.way,1000)\n\nOr something like that. But maybe it's a complete nonsense ...\n\nTomas\n\n",
"msg_date": "Tue, 7 Aug 2012 14:16:43 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query: Select all buildings that have >1\n\tpharmacies and >1 schools within 1000m"
},
{
"msg_contents": "Your proposal lacks the requirement that it's the same building from\nwhere pharmacies and schools are reachable.\nBut I think about.\n\nYours, S.\n\n2012/8/7 Tomas Vondra <[email protected]>:\n> On 7 Srpen 2012, 14:01, Stefan Keller wrote:\n>> Hi\n>>\n>> I have an interesting query to be optimized related to this one [1].\n>>\n>> The query definition is: Select all buildings that have more than 1\n>> pharmacies and more than 1 schools within a radius of 1000m.\n>>\n>> The problem is that I think that this query is inherently O(n^2). In\n>> fact the solution I propose below takes forever...\n>\n> What about plain INTERSECT? Something like\n>\n> SELECT osm_id FROM osm_poi AS p, osm_polygon b\n> WHERE p.tags @> hstore('amenity','pharmacy')\n> AND ST_DWithin(b.way,p.way,1000)\n> INTERSECT\n> SELECT osm_id FROM osm_poi AS p, osm_polygon b\n> WHERE p.tags @> hstore('amenity','school')\n> AND ST_DWithin(b.way,p.way,1000)\n>\n> Or something like that. But maybe it's a complete nonsense ...\n>\n> Tomas\n>\n",
"msg_date": "Tue, 7 Aug 2012 14:22:53 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query: Select all buildings that have >1\n\tpharmacies and >1 schools within 1000m"
},
{
"msg_contents": "On 7 Srpen 2012, 14:22, Stefan Keller wrote:\n> Your proposal lacks the requirement that it's the same building from\n> where pharmacies and schools are reachable.\n> But I think about.\n\nI don't know the dataset so I've expected the osm_id to identify the\nbuilding - then the intersect should work as AND for the conditions.\n\nAnd I see I've forgot to include the 'is building' condition, so it should\nbe like this:\n\n SELECT b.osm_id FROM osm_poi AS p, osm_polygon b\n WHERE p.tags @> hstore('amenity','pharmacy')\n AND b.tags @> hstore('building','yes')\n AND ST_DWithin(b.way,p.way,1000)\n INTERSECT\n SELECT b.osm_id FROM osm_poi AS p, osm_polygon b\n WHERE p.tags @> hstore('amenity','school')\n AND b.tags @> hstore('building','yes')\n AND ST_DWithin(b.way,p.way,1000)\n\nTomas\n\n",
"msg_date": "Tue, 7 Aug 2012 15:56:46 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query: Select all buildings that have >1\n\tpharmacies and >1 schools within 1000m"
},
{
"msg_contents": "On Tue, Aug 7, 2012 at 5:01 AM, Stefan Keller <[email protected]> wrote:\n\n> Hi\n>\n> I have an interesting query to be optimized related to this one [1].\n>\n> The query definition is: Select all buildings that have more than 1\n> pharmacies and more than 1 schools within a radius of 1000m.\n>\n> The problem is that I think that this query is inherently O(n^2). In\n> fact the solution I propose below takes forever...\n>\n\nMaybe you could get rid of the O(n^2) aspect like this:\n\n Select all buildings that have more than 1\n pharmacies and more than 1 schools within a radius of 1000m\n from\n (Select all buildings that have more than four (pharmacy or school)\n within a radius of 1000m)\n\nThe inner select should be fast -- you could make it fast by creating a new\nproperty like \"building of interest\" that was \"pharmacy or school\" and\nbuild an index on the \"building of interest\" property.\n\nThe inner query would reduce your sample set to a much smaller set of\nbuildings, and presumably the outer query could handle that pretty quickly.\n\nCraig James\n\n\n>\n> My questions:\n>\n> 1. Any comments about the nature of this problem?\n>\n> 2. ... on how to speed it up ?\n>\n> 3. In the original query [1] there's a count which contains a\n> subquery. According to my tests PostgreSQL does not allow this despite\n> the documentation which says \"count(expression)\".\n>\n> Remarks: I know that \"count(*)\" could be faster on PostgreSQL but\n> \"count(osm_id)\" does not change the query plan and this does not seem\n> to be the bottleneck here anyway.\n>\n> Yours, S.\n>\n> [1]\n> http://gis.stackexchange.com/questions/11445/selecting-pois-around-specific-buildings-using-postgis\n>\n>\n> Here's my query:\n>\n> -- Select all buildings that have >1 pharmacies and >1 schools within\n> 1000m:\n> SELECT osm_id AS building_id\n> FROM\n> (SELECT osm_id, way\n> FROM osm_polygon\n> WHERE tags @> hstore('building','yes')\n> ) AS b\n> WHERE\n> (SELECT count(*) > 1\n> FROM osm_poi AS p\n> WHERE p.tags @> hstore('amenity','pharmacy')\n> AND ST_DWithin(b.way,p.way,1000)\n> )\n> AND\n> (SELECT count(*) > 1\n> FROM osm_poi AS p\n> WHERE p.tags @> hstore('amenity','school')\n> AND ST_DWithin(b.way,p.way,1000)\n> )\n> -- Total query runtime: 4308488 ms. 
66345 rows retrieved.\n>\n> Here's the query plan (from EXPLAIN):\n> \"Index Scan using osm_polygon_tags_idx on osm_polygon\n> (cost=0.00..406812.81 rows=188 width=901)\"\n> \" Index Cond: (tags @> '\"building\"=>\"yes\"'::hstore)\"\n> \" Filter: ((SubPlan 1) AND (SubPlan 2))\"\n> \" SubPlan 1\"\n> \" -> Aggregate (cost=269.19..269.20 rows=1 width=0)\"\n> \" -> Bitmap Heap Scan on osm_poi p (cost=7.76..269.19\n> rows=1 width=0)\"\n> \" Recheck Cond: (way && st_expand(osm_polygon.way,\n> 1000::double precision))\"\n> \" Filter: ((tags @> '\"amenity\"=>\"pharmacy\"'::hstore)\n> AND (osm_polygon.way && st_expand(way, 1000::double precision)) AND\n> _st_dwithin(osm_polygon.way, way, 1000::double precision))\"\n> \" -> Bitmap Index Scan on osm_poi_way_idx\n> (cost=0.00..7.76 rows=62 width=0)\"\n> \" Index Cond: (way && st_expand(osm_polygon.way,\n> 1000::double precision))\"\n> \" SubPlan 2\"\n> \" -> Aggregate (cost=269.19..269.20 rows=1 width=0)\"\n> \" -> Bitmap Heap Scan on osm_poi p (cost=7.76..269.19\n> rows=1 width=0)\"\n> \" Recheck Cond: (way && st_expand(osm_polygon.way,\n> 1000::double precision))\"\n> \" Filter: ((tags @> '\"amenity\"=>\"school\"'::hstore) AND\n> (osm_polygon.way && st_expand(way, 1000::double precision)) AND\n> _st_dwithin(osm_polygon.way, way, 1000::double precision))\"\n> \" -> Bitmap Index Scan on osm_poi_way_idx\n> (cost=0.00..7.76 rows=62 width=0)\"\n> \" Index Cond: (way && st_expand(osm_polygon.way,\n> 1000::double precision))\"\n>\n> ***\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Tue, 7 Aug 2012 07:37:42 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query: Select all buildings that have >1\n\tpharmacies and >1 schools within 1000m"
},
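A rough translation of Craig's two-stage idea into SQL against the tables used in this thread; the hstore keys and geometry columns come from Stefan's query, everything else (including the inner threshold of at least 4 = 2 pharmacies + 2 schools) is an assumption:

    WITH poi AS (
        SELECT way, tags->'amenity' AS amenity
        FROM osm_poi
        WHERE tags->'amenity' IN ('pharmacy', 'school')
    ),
    candidates AS (   -- cheap prefilter: buildings with at least 4 POIs of interest nearby
        SELECT b.osm_id, b.way
        FROM osm_polygon b
        WHERE b.tags @> hstore('building','yes')
          AND (SELECT count(*) FROM poi p WHERE ST_DWithin(b.way, p.way, 1000)) >= 4
    )
    SELECT c.osm_id AS building_id
    FROM candidates c
    WHERE (SELECT count(*) FROM poi p
           WHERE p.amenity = 'pharmacy' AND ST_DWithin(c.way, p.way, 1000)) > 1
      AND (SELECT count(*) FROM poi p
           WHERE p.amenity = 'school' AND ST_DWithin(c.way, p.way, 1000)) > 1;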
{
"msg_contents": "Hi Craig\n\nClever proposal!\nI slightly tried to adapt it to the hstore involved.\nNow I'm having a weird problem that PG says that \"relation 'p' does not exist\".\nWhy does PG recognize table b in the subquery but not table p?\nAny ideas?\n\n-- Stefan\n\n\nSELECT b.way AS building_geometry\nFROM\n (SELECT way\n FROM osm_polygon\n WHERE tags @> hstore('building','yes')\n ) AS b,\n (SELECT way, tags->'amenity' as value\n FROM osm_poi\n WHERE tags ? 'amenity'\n ) AS p\nWHERE\n (SELECT count(*) > 1\n FROM p\n WHERE p.value = 'pharmacy'\n AND ST_DWithin(b.way,p.way,1000)\n )\n AND\n (SELECT count(*) > 1\n FROM p\n WHERE p.value = 'school'\n AND ST_DWithin(b.way,p.way,1000)\n )\n\nERROR: relation \"p\" does not exist\nLINE 14: FROM p\n\n\n2012/8/7 Craig James <[email protected]>:\n> On Tue, Aug 7, 2012 at 5:01 AM, Stefan Keller <[email protected]> wrote:\n>>\n>> Hi\n>>\n>> I have an interesting query to be optimized related to this one [1].\n>>\n>> The query definition is: Select all buildings that have more than 1\n>> pharmacies and more than 1 schools within a radius of 1000m.\n>>\n>> The problem is that I think that this query is inherently O(n^2). In\n>> fact the solution I propose below takes forever...\n>\n>\n> Maybe you could get rid of the O(n^2) aspect like this:\n>\n>\n> Select all buildings that have more than 1\n> pharmacies and more than 1 schools within a radius of 1000m\n> from\n> (Select all buildings that have more than four (pharmacy or school)\n> within a radius of 1000m)\n>\n> The inner select should be fast -- you could make it fast by creating a new\n> property like \"building of interest\" that was \"pharmacy or school\" and build\n> an index on the \"building of interest\" property.\n>\n> The inner query would reduce your sample set to a much smaller set of\n> buildings, and presumably the outer query could handle that pretty quickly.\n>\n> Craig James\n>\n>>\n>>\n>> My questions:\n>>\n>> 1. Any comments about the nature of this problem?\n>>\n>> 2. ... on how to speed it up ?\n>>\n>> 3. In the original query [1] there's a count which contains a\n>> subquery. According to my tests PostgreSQL does not allow this despite\n>> the documentation which says \"count(expression)\".\n>>\n>> Remarks: I know that \"count(*)\" could be faster on PostgreSQL but\n>> \"count(osm_id)\" does not change the query plan and this does not seem\n>> to be the bottleneck here anyway.\n>>\n>> Yours, S.\n>>\n>> [1]\n>> http://gis.stackexchange.com/questions/11445/selecting-pois-around-specific-buildings-using-postgis\n>>\n>>\n>> Here's my query:\n>>\n>> -- Select all buildings that have >1 pharmacies and >1 schools within\n>> 1000m:\n>> SELECT osm_id AS building_id\n>> FROM\n>> (SELECT osm_id, way\n>> FROM osm_polygon\n>> WHERE tags @> hstore('building','yes')\n>> ) AS b\n>> WHERE\n>> (SELECT count(*) > 1\n>> FROM osm_poi AS p\n>> WHERE p.tags @> hstore('amenity','pharmacy')\n>> AND ST_DWithin(b.way,p.way,1000)\n>> )\n>> AND\n>> (SELECT count(*) > 1\n>> FROM osm_poi AS p\n>> WHERE p.tags @> hstore('amenity','school')\n>> AND ST_DWithin(b.way,p.way,1000)\n>> )\n>> -- Total query runtime: 4308488 ms. 
66345 rows retrieved.\n>>\n>> Here's the query plan (from EXPLAIN):\n>> \"Index Scan using osm_polygon_tags_idx on osm_polygon\n>> (cost=0.00..406812.81 rows=188 width=901)\"\n>> \" Index Cond: (tags @> '\"building\"=>\"yes\"'::hstore)\"\n>> \" Filter: ((SubPlan 1) AND (SubPlan 2))\"\n>> \" SubPlan 1\"\n>> \" -> Aggregate (cost=269.19..269.20 rows=1 width=0)\"\n>> \" -> Bitmap Heap Scan on osm_poi p (cost=7.76..269.19\n>> rows=1 width=0)\"\n>> \" Recheck Cond: (way && st_expand(osm_polygon.way,\n>> 1000::double precision))\"\n>> \" Filter: ((tags @> '\"amenity\"=>\"pharmacy\"'::hstore)\n>> AND (osm_polygon.way && st_expand(way, 1000::double precision)) AND\n>> _st_dwithin(osm_polygon.way, way, 1000::double precision))\"\n>> \" -> Bitmap Index Scan on osm_poi_way_idx\n>> (cost=0.00..7.76 rows=62 width=0)\"\n>> \" Index Cond: (way && st_expand(osm_polygon.way,\n>> 1000::double precision))\"\n>> \" SubPlan 2\"\n>> \" -> Aggregate (cost=269.19..269.20 rows=1 width=0)\"\n>> \" -> Bitmap Heap Scan on osm_poi p (cost=7.76..269.19\n>> rows=1 width=0)\"\n>> \" Recheck Cond: (way && st_expand(osm_polygon.way,\n>> 1000::double precision))\"\n>> \" Filter: ((tags @> '\"amenity\"=>\"school\"'::hstore) AND\n>> (osm_polygon.way && st_expand(way, 1000::double precision)) AND\n>> _st_dwithin(osm_polygon.way, way, 1000::double precision))\"\n>> \" -> Bitmap Index Scan on osm_poi_way_idx\n>> (cost=0.00..7.76 rows=62 width=0)\"\n>> \" Index Cond: (way && st_expand(osm_polygon.way,\n>> 1000::double precision))\"\n>>\n>> ***\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n",
"msg_date": "Wed, 8 Aug 2012 02:07:48 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query: Select all buildings that have >1\n\tpharmacies and >1 schools within 1000m"
},
{
"msg_contents": "On Tue, Aug 7, 2012 at 5:07 PM, Stefan Keller <[email protected]> wrote:\n> Hi Craig\n>\n> Clever proposal!\n> I slightly tried to adapt it to the hstore involved.\n> Now I'm having a weird problem that PG says that \"relation 'p' does not exist\".\n> Why does PG recognize table b in the subquery but not table p?\n> Any ideas?\n\nI don't think it does recognize b, either. It just fell over on p\nbefore it had a chance to fall over on b.\n\nI think you have to use WITH if you want to reference the same\nsubquery in multiple FROMs.\n\nAnother approach would be to add explicit conditions for there being\nat least 1 school and 1 pharmacy within distance. There can't be >1\nunless there is >=1, but the join possibilities for >=1 (i.e. \"where\nexists\" rather than \"where (select count(*)...)>1\" ) are much more\nattractive than the ones for >1.\n\nCheers,\n\nJeff\n",
"msg_date": "Tue, 7 Aug 2012 17:50:32 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query: Select all buildings that have >1\n\tpharmacies and >1 schools within 1000m"
},
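A sketch of the WITH rewrite Jeff suggests, which makes the POI subquery visible under the name p in both count subqueries (names and hstore keys as in Stefan's query above):

    WITH p AS (
        SELECT way, tags->'amenity' AS value
        FROM osm_poi
        WHERE tags ? 'amenity'
    )
    SELECT b.way AS building_geometry
    FROM (SELECT way FROM osm_polygon WHERE tags @> hstore('building','yes')) AS b
    WHERE (SELECT count(*) FROM p
           WHERE p.value = 'pharmacy' AND ST_DWithin(b.way, p.way, 1000)) > 1
      AND (SELECT count(*) FROM p
           WHERE p.value = 'school' AND ST_DWithin(b.way, p.way, 1000)) > 1;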
{
"msg_contents": "Hi\n\n2012/8/8 Jeff Janes <[email protected]>:\n> On Tue, Aug 7, 2012 at 5:07 PM, Stefan Keller <[email protected]> wrote:\n>> Hi Craig\n>>\n>> Clever proposal!\n>> I slightly tried to adapt it to the hstore involved.\n>> Now I'm having a weird problem that PG says that \"relation 'p' does not exist\".\n>> Why does PG recognize table b in the subquery but not table p?\n>> Any ideas?\n>\n> I don't think it does recognize b, either. It just fell over on p\n> before it had a chance to fall over on b.\n\nNo, the b get's recognized. See my original query.\nThat's a strange behaviour of the SQL parser which I can't understand.\n\n> I think you have to use WITH if you want to reference the same\n> subquery in multiple FROMs.\n\nI'll try that with CTE too.\n\n> Another approach would be to add explicit conditions for there being\n> at least 1 school and 1 pharmacy within distance. There can't be >1\n> unless there is >=1, but the join possibilities for >=1 (i.e. \"where\n> exists\" rather than \"where (select count(*)...)>1\" ) are much more\n> attractive than the ones for >1.\n>\n> Cheers,\n>\n> Jeff\n\nYou mean, first doing a select on existence and then apply the count\ncondition later?\n\nStefan\n",
"msg_date": "Thu, 9 Aug 2012 13:00:18 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query: Select all buildings that have >1\n\tpharmacies and >1 schools within 1000m"
},
{
"msg_contents": "On Thu, Aug 9, 2012 at 4:00 AM, Stefan Keller <[email protected]> wrote:\n> Hi\n>\n> 2012/8/8 Jeff Janes <[email protected]>:\n>> On Tue, Aug 7, 2012 at 5:07 PM, Stefan Keller <[email protected]> wrote:\n>>> Hi Craig\n>>>\n>>> Clever proposal!\n>>> I slightly tried to adapt it to the hstore involved.\n>>> Now I'm having a weird problem that PG says that \"relation 'p' does not exist\".\n>>> Why does PG recognize table b in the subquery but not table p?\n>>> Any ideas?\n>>\n>> I don't think it does recognize b, either. It just fell over on p\n>> before it had a chance to fall over on b.\n>\n> No, the b get's recognized. See my original query.\n> That's a strange behaviour of the SQL parser which I can't understand.\n\nOh, I see. You are referencing b only as the qualifier for a column\nname, while you are trying to reference p as a an entire query. I\ninitially misread it and thought you referencing both b and p in both\nways each.\n\n>\n>> I think you have to use WITH if you want to reference the same\n>> subquery in multiple FROMs.\n>\n> I'll try that with CTE too.\n>\n>> Another approach would be to add explicit conditions for there being\n>> at least 1 school and 1 pharmacy within distance. There can't be >1\n>> unless there is >=1, but the join possibilities for >=1 (i.e. \"where\n>> exists\" rather than \"where (select count(*)...)>1\" ) are much more\n>> attractive than the ones for >1.\n>>\n>> Cheers,\n>>\n>> Jeff\n>\n> You mean, first doing a select on existence and then apply the count\n> condition later?\n\nYes, exactly.\n\nOf course this won't help if most buildings do have at least one of\neach within distance, as then the prefilter is not very selective.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 9 Aug 2012 09:17:20 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query: Select all buildings that have >1\n\tpharmacies and >1 schools within 1000m"
}
] |
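A rough sketch of the rewrite discussed in this thread, combining the EXISTS prefilter with the original count check. It assumes the osm_polygon/osm_poi tables, the hstore tags column and the 1000 m radius visible in the query plan above; everything else (aliases, exact tag literals) is illustrative rather than taken from the original query:

    SELECT b.*
    FROM osm_polygon b
    WHERE b.tags @> '"building"=>"yes"'::hstore
      -- cheap prefilter: at least one pharmacy and one school nearby
      AND EXISTS (SELECT 1 FROM osm_poi p
                  WHERE p.tags @> '"amenity"=>"pharmacy"'::hstore
                    AND ST_DWithin(b.way, p.way, 1000))
      AND EXISTS (SELECT 1 FROM osm_poi p
                  WHERE p.tags @> '"amenity"=>"school"'::hstore
                    AND ST_DWithin(b.way, p.way, 1000))
      -- full check: strictly more than one of each
      AND (SELECT count(*) FROM osm_poi p
           WHERE p.tags @> '"amenity"=>"pharmacy"'::hstore
             AND ST_DWithin(b.way, p.way, 1000)) > 1
      AND (SELECT count(*) FROM osm_poi p
           WHERE p.tags @> '"amenity"=>"school"'::hstore
             AND ST_DWithin(b.way, p.way, 1000)) > 1;

As Jeff notes, the prefilter only pays off if most buildings do not have a pharmacy or school within range; otherwise the count subqueries still run for nearly every row.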
[
{
"msg_contents": "I found this discussion from 2005 that says you can drop and restore a\ntrigger inside a transaction, but that doing so locks the whole table:\n\nhttp://archives.postgresql.org/pgsql-general/2005-01/msg01347.php\n> From: Jeff Davis\n>\n> It got me curious enough that I tested it, and apparently droping a\n> trigger locks the table. Any actions on that table must wait until the\n> transaction that drops the trigger finishes.\n>\n> So, technically my system works, but requires a rather nasty lock while\n> the transaction (the one that doesn't want the trigger to execute)\n> finishes.\n\nI have a process that copies customer data from one database to\nanother, and we know that the trigger has already done its work. The\ntrigger is thus redundant, but it slows the copy WAY down, so I wanted\nto drop/restore it inside a transaction.\n\nIs it still true that drop-trigger inside a transaction will lock the\nwhole table? We're using 8.4.\n\nThanks,\nCraig\n",
"msg_date": "Tue, 7 Aug 2012 11:48:23 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is drop/restore trigger transactional?"
},
{
"msg_contents": "On Tue, Aug 7, 2012 at 1:48 PM, Craig James <[email protected]> wrote:\n> I found this discussion from 2005 that says you can drop and restore a\n> trigger inside a transaction, but that doing so locks the whole table:\n>\n> http://archives.postgresql.org/pgsql-general/2005-01/msg01347.php\n>> From: Jeff Davis\n>>\n>> It got me curious enough that I tested it, and apparently droping a\n>> trigger locks the table. Any actions on that table must wait until the\n>> transaction that drops the trigger finishes.\n>>\n>> So, technically my system works, but requires a rather nasty lock while\n>> the transaction (the one that doesn't want the trigger to execute)\n>> finishes.\n>\n> I have a process that copies customer data from one database to\n> another, and we know that the trigger has already done its work. The\n> trigger is thus redundant, but it slows the copy WAY down, so I wanted\n> to drop/restore it inside a transaction.\n>\n> Is it still true that drop-trigger inside a transaction will lock the\n> whole table? We're using 8.4.\n\nabsolutely -- the database needs to guard against other writers to the\ntable doing inserts in the meantime. there's no concept in SQL of\n'enforce this trigger for all writers, except for me' nor should there\nbe.\n\none possible workaround is to hack your trigger function so that it\ndoesn't operate for particular roles. so your trigger might be:\n\nIF current_user = 'bulk_writer' THEN\n return new;\nEND IF;\n<expensive stuff>\n\nthen you can log in with the bulk_writer role when you want to bypass\nthe checks. if your triggers are RI triggers though, you're hosed.\n\nmerlin\n",
"msg_date": "Tue, 7 Aug 2012 15:15:19 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is drop/restore trigger transactional?"
},
{
"msg_contents": "On Tue, Aug 7, 2012 at 2:15 PM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Aug 7, 2012 at 1:48 PM, Craig James <[email protected]> wrote:\n>> I found this discussion from 2005 that says you can drop and restore a\n>> trigger inside a transaction, but that doing so locks the whole table:\n>>\n>> http://archives.postgresql.org/pgsql-general/2005-01/msg01347.php\n>>> From: Jeff Davis\n>>>\n>>> It got me curious enough that I tested it, and apparently droping a\n>>> trigger locks the table. Any actions on that table must wait until the\n>>> transaction that drops the trigger finishes.\n>>>\n>>> So, technically my system works, but requires a rather nasty lock while\n>>> the transaction (the one that doesn't want the trigger to execute)\n>>> finishes.\n>>\n>> I have a process that copies customer data from one database to\n>> another, and we know that the trigger has already done its work. The\n>> trigger is thus redundant, but it slows the copy WAY down, so I wanted\n>> to drop/restore it inside a transaction.\n>>\n>> Is it still true that drop-trigger inside a transaction will lock the\n>> whole table? We're using 8.4.\n>\n> absolutely -- the database needs to guard against other writers to the\n> table doing inserts in the meantime. there's no concept in SQL of\n> 'enforce this trigger for all writers, except for me' nor should there\n> be.\n>\n> one possible workaround is to hack your trigger function so that it\n> doesn't operate for particular roles. so your trigger might be:\n>\n> IF current_user = 'bulk_writer' THEN\n> return new;\n> END IF;\n> <expensive stuff>\n>\n> then you can log in with the bulk_writer role when you want to bypass\n> the checks. if your triggers are RI triggers though, you're hosed.\n\nI'm willing to bet that even without doing anything, just invoking the\ntrigger will still cost a LOT more than the cost incurred with it just\nturned off.\n",
"msg_date": "Tue, 7 Aug 2012 14:39:58 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is drop/restore trigger transactional?"
},
{
"msg_contents": "On Tue, Aug 7, 2012 at 1:15 PM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Aug 7, 2012 at 1:48 PM, Craig James <[email protected]> wrote:\n>> I found this discussion from 2005 that says you can drop and restore a\n>> trigger inside a transaction, but that doing so locks the whole table:\n>>\n>> http://archives.postgresql.org/pgsql-general/2005-01/msg01347.php\n>>> From: Jeff Davis\n>>>\n>>> It got me curious enough that I tested it, and apparently droping a\n>>> trigger locks the table. Any actions on that table must wait until the\n>>> transaction that drops the trigger finishes.\n>>>\n>>> So, technically my system works, but requires a rather nasty lock while\n>>> the transaction (the one that doesn't want the trigger to execute)\n>>> finishes.\n>>\n>> I have a process that copies customer data from one database to\n>> another, and we know that the trigger has already done its work. The\n>> trigger is thus redundant, but it slows the copy WAY down, so I wanted\n>> to drop/restore it inside a transaction.\n>>\n>> Is it still true that drop-trigger inside a transaction will lock the\n>> whole table? We're using 8.4.\n>\n> absolutely -- the database needs to guard against other writers to the\n> table doing inserts in the meantime.\n\nBut why must it? Why can't other writers simply obey the trigger,\nsince its removal has not yet been committed? You could have the\nanomaly that a longer-running later-committing transaction used the\nold trigger while a shorter-running earlier-committing transaction\nused the new one (which isn't really an anomaly if the old and new are\nidentical), but is that even barred if neither of them is in\nserializable mode? And since triggers can do pretty much anything\nthey want internally, there isn't much of a transactional guarantee\nwith them anyway.\n\n> there's no concept in SQL of\n> 'enforce this trigger for all writers, except for me' nor should there\n> be.\n\nWhy shouldn't there be, other than the bother of implementing and\ndocumenting it? Sometimes theory needs to compromise with reality.\nWhen we don't provide slightly dangerous ways to make those\ncompromises, people are forced to use very dangerous ways instead.\n\n>\n> one possible workaround is to hack your trigger function so that it\n> doesn't operate for particular roles. so your trigger might be:\n>\n> IF current_user = 'bulk_writer' THEN\n> return new;\n> END IF;\n> <expensive stuff>\n\nI don't know Craig's case, but often the most expensive of the\n\"expensive stuff\" is the bare fact of firing a trigger in the first\nplace.\n\ncheers,\n\nJeff\n",
"msg_date": "Tue, 7 Aug 2012 13:45:18 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is drop/restore trigger transactional?"
},
{
"msg_contents": "On Tue, Aug 7, 2012 at 3:45 PM, Jeff Janes <[email protected]> wrote:\n>> absolutely -- the database needs to guard against other writers to the\n>> table doing inserts in the meantime.\n>\n> But why must it? Why can't other writers simply obey the trigger,\n> since its removal has not yet been committed? You could have the\n> anomaly that a longer-running later-committing transaction used the\n> old trigger while a shorter-running earlier-committing transaction\n> used the new one (which isn't really an anomaly if the old and new are\n> identical), but is that even barred if neither of them is in\n> serializable mode? And since triggers can do pretty much anything\n> they want internally, there isn't much of a transactional guarantee\n> with them anyway.\n\nTriggers give a 100% transactional guarantee, period. Yes, you can do\nthings in them that violate MVCC, like make dblink calls, but you can\ndo that from any SQL statement; they are no less transactionally\nguaranteed than regular SQL. As to your wider point, you could in\ntheory interleave other work with adjustment of triggers although it\nseems pretty complicated and weird. Also RI triggers (the most\nimportant case) would need special handling since (like check\nconstraints) they are supposed to apply to the table as a whole, not\nrecords inserted since trigger creation. Also serializable would be\nright out as you noted.\n\n>> there's no concept in SQL of\n>> 'enforce this trigger for all writers, except for me' nor should there\n>> be.\n>\n> Why shouldn't there be, other than the bother of implementing and\n> documenting it? Sometimes theory needs to compromise with reality.\n> When we don't provide slightly dangerous ways to make those\n> compromises, people are forced to use very dangerous ways instead.\n>\n>>\n>> one possible workaround is to hack your trigger function so that it\n>> doesn't operate for particular roles. so your trigger might be:\n>>\n>> IF current_user = 'bulk_writer' THEN\n>> return new;\n>> END IF;\n>> <expensive stuff>\n>\n> I don't know Craig's case, but often the most expensive of the\n> \"expensive stuff\" is the bare fact of firing a trigger in the first\n> place.\n\nThat's highly debatable. a function call is somewhat expensive but is\na fixed cpu cost. RI triggers or complicated queries can really get\nexpensive, especially with large tables.\n\nmerlin\n",
"msg_date": "Tue, 7 Aug 2012 16:21:36 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is drop/restore trigger transactional?"
},
{
"msg_contents": "On Tue, Aug 7, 2012 at 1:45 PM, Jeff Janes <[email protected]> wrote:\n> On Tue, Aug 7, 2012 at 1:15 PM, Merlin Moncure <[email protected]> wrote:\n>> On Tue, Aug 7, 2012 at 1:48 PM, Craig James <[email protected]> wrote:\n>>> I found this discussion from 2005 that says you can drop and restore a\n>>> trigger inside a transaction, but that doing so locks the whole table:\n>>>\n>>> http://archives.postgresql.org/pgsql-general/2005-01/msg01347.php\n>>>> From: Jeff Davis\n>>>>\n>>>> It got me curious enough that I tested it, and apparently droping a\n>>>> trigger locks the table. Any actions on that table must wait until the\n>>>> transaction that drops the trigger finishes.\n>>>>\n>>>> So, technically my system works, but requires a rather nasty lock while\n>>>> the transaction (the one that doesn't want the trigger to execute)\n>>>> finishes.\n>>>\n>>> I have a process that copies customer data from one database to\n>>> another, and we know that the trigger has already done its work. The\n>>> trigger is thus redundant, but it slows the copy WAY down, so I wanted\n>>> to drop/restore it inside a transaction.\n>>>\n>>> Is it still true that drop-trigger inside a transaction will lock the\n>>> whole table? We're using 8.4.\n>>\n>> absolutely -- the database needs to guard against other writers to the\n>> table doing inserts in the meantime.\n>\n> But why must it? Why can't other writers simply obey the trigger,\n> since its removal has not yet been committed?\n>> there's no concept in SQL of\n>> 'enforce this trigger for all writers, except for me' nor should there\n>> be.\n>\n> Why shouldn't there be, other than the bother of implementing and\n> documenting it? Sometimes theory needs to compromise with reality.\n> When we don't provide slightly dangerous ways to make those\n> compromises, people are forced to use very dangerous ways instead.\n>\n>>\n>> one possible workaround is to hack your trigger function so that it\n>> doesn't operate for particular roles. so your trigger might be:\n>>\n>> IF current_user = 'bulk_writer' THEN\n>> return new;\n>> END IF;\n>> <expensive stuff>\n>\n> I don't know Craig's case, but often the most expensive of the\n> \"expensive stuff\" is the bare fact of firing a trigger in the first\n> place.\n\nMy use case is pretty simple: Copy some already-validated user data\nfrom one schema to another. Since the trigger has already been\napplied, we're guaranteed that the data is already in the form we\nwant.\n\nFor your amusement: The trigger ensures that you can't buy illegal\ndrugs, explosives, weapons of war, corrosives and other dangerous or\nillegal chemical compounds. It executes a query against known\ncompounds from the DEA, Homeland Security, Department of\nTransportation and several other lists. Then calls a series of\nfunctions that implement \"rules\" to find illegal or dangerous\ncompounds that aren't on anyone's list. Some examples: \"cocaine\nderivatives\" for obvious reasons; \"two or more nitro groups on a small\nmolecule\" to find chemicals that might explode; and \"Metal-hydrogen\nbond\" to find things that will catch fire if exposed to air.\n\nThis is implemented in the database to esure that no matter how badly\na programmer screws up an app, you still can't get these chemical\ncompounds into an order. The chemicals need to be in our database for\ninformational purposes, but we don't want law enforcement knocking on\nour door.\n\nObviously this is a very expensive trigger, but one that we can drop\nin a very specific circumstance. 
But we NEVER want to drop it for\neveryone. It seems like a very reasonable use-case to me.\n\nCraig James\n",
"msg_date": "Tue, 7 Aug 2012 14:39:58 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is drop/restore trigger transactional?"
},
{
"msg_contents": "On Tue, Aug 7, 2012 at 2:39 PM, Craig James <[email protected]> wrote:\n\n>\n> Obviously this is a very expensive trigger, but one that we can drop\n> in a very specific circumstance. But we NEVER want to drop it for\n> everyone. It seems like a very reasonable use-case to me.\n>\n>\nSounds like you should try doing the work inside the trigger conditionally\nand see if that improves performance enough, since you aren't likely to get\nanything that better suits your needs without patching postgres.\n\nOn Tue, Aug 7, 2012 at 2:39 PM, Craig James <[email protected]> wrote:\n\nObviously this is a very expensive trigger, but one that we can drop\nin a very specific circumstance. But we NEVER want to drop it for\neveryone. It seems like a very reasonable use-case to me.\nSounds like you should try doing the work inside the trigger conditionally and see if that improves performance enough, since you aren't likely to get anything that better suits your needs without patching postgres.",
"msg_date": "Tue, 7 Aug 2012 15:01:28 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is drop/restore trigger transactional?"
},
{
"msg_contents": "On Tue, Aug 7, 2012 at 4:39 PM, Craig James <[email protected]> wrote:\n> On Tue, Aug 7, 2012 at 1:45 PM, Jeff Janes <[email protected]> wrote:\n>> On Tue, Aug 7, 2012 at 1:15 PM, Merlin Moncure <[email protected]> wrote:\n>>> On Tue, Aug 7, 2012 at 1:48 PM, Craig James <[email protected]> wrote:\n>>>> I found this discussion from 2005 that says you can drop and restore a\n>>>> trigger inside a transaction, but that doing so locks the whole table:\n>>>>\n>>>> http://archives.postgresql.org/pgsql-general/2005-01/msg01347.php\n>>>>> From: Jeff Davis\n>>>>>\n>>>>> It got me curious enough that I tested it, and apparently droping a\n>>>>> trigger locks the table. Any actions on that table must wait until the\n>>>>> transaction that drops the trigger finishes.\n>>>>>\n>>>>> So, technically my system works, but requires a rather nasty lock while\n>>>>> the transaction (the one that doesn't want the trigger to execute)\n>>>>> finishes.\n>>>>\n>>>> I have a process that copies customer data from one database to\n>>>> another, and we know that the trigger has already done its work. The\n>>>> trigger is thus redundant, but it slows the copy WAY down, so I wanted\n>>>> to drop/restore it inside a transaction.\n>>>>\n>>>> Is it still true that drop-trigger inside a transaction will lock the\n>>>> whole table? We're using 8.4.\n>>>\n>>> absolutely -- the database needs to guard against other writers to the\n>>> table doing inserts in the meantime.\n>>\n>> But why must it? Why can't other writers simply obey the trigger,\n>> since its removal has not yet been committed?\n>>> there's no concept in SQL of\n>>> 'enforce this trigger for all writers, except for me' nor should there\n>>> be.\n>>\n>> Why shouldn't there be, other than the bother of implementing and\n>> documenting it? Sometimes theory needs to compromise with reality.\n>> When we don't provide slightly dangerous ways to make those\n>> compromises, people are forced to use very dangerous ways instead.\n>>\n>>>\n>>> one possible workaround is to hack your trigger function so that it\n>>> doesn't operate for particular roles. so your trigger might be:\n>>>\n>>> IF current_user = 'bulk_writer' THEN\n>>> return new;\n>>> END IF;\n>>> <expensive stuff>\n>>\n>> I don't know Craig's case, but often the most expensive of the\n>> \"expensive stuff\" is the bare fact of firing a trigger in the first\n>> place.\n>\n> My use case is pretty simple: Copy some already-validated user data\n> from one schema to another. Since the trigger has already been\n> applied, we're guaranteed that the data is already in the form we\n> want.\n>\n> For your amusement: The trigger ensures that you can't buy illegal\n> drugs, explosives, weapons of war, corrosives and other dangerous or\n> illegal chemical compounds. It executes a query against known\n> compounds from the DEA, Homeland Security, Department of\n> Transportation and several other lists. Then calls a series of\n> functions that implement \"rules\" to find illegal or dangerous\n> compounds that aren't on anyone's list. Some examples: \"cocaine\n> derivatives\" for obvious reasons; \"two or more nitro groups on a small\n> molecule\" to find chemicals that might explode; and \"Metal-hydrogen\n> bond\" to find things that will catch fire if exposed to air.\n>\n> This is implemented in the database to esure that no matter how badly\n> a programmer screws up an app, you still can't get these chemical\n> compounds into an order. 
The chemicals need to be in our database for\n> informational purposes, but we don't want law enforcement knocking on\n> our door.\n>\n> Obviously this is a very expensive trigger, but one that we can drop\n> in a very specific circumstance. But we NEVER want to drop it for\n> everyone. It seems like a very reasonable use-case to me.\n\nwell, there you go: create a role that is excepted from having to run\nthrough those checks and take appropriate precautions (password,\npg_hba.conf etc) so that only people/things that are supposed to\nbypass the checks can do so. then the trigger can look for the role\nand punt.\n\nmerlin\n",
"msg_date": "Tue, 7 Aug 2012 17:02:31 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is drop/restore trigger transactional?"
},
{
"msg_contents": "On Tue, Aug 7, 2012 at 2:39 PM, Craig James <[email protected]> wrote:\n> On Tue, Aug 7, 2012 at 1:45 PM, Jeff Janes <[email protected]> wrote:\n>> On Tue, Aug 7, 2012 at 1:15 PM, Merlin Moncure <[email protected]> wrote:\n>>>\n>>> IF current_user = 'bulk_writer' THEN\n>>> return new;\n>>> END IF;\n>>> <expensive stuff>\n>>\n>> I don't know Craig's case, but often the most expensive of the\n>> \"expensive stuff\" is the bare fact of firing a trigger in the first\n>> place.\n>\n> My use case is pretty simple: Copy some already-validated user data\n> from one schema to another. Since the trigger has already been\n> applied, we're guaranteed that the data is already in the form we\n> want.\n>\n> For your amusement:\n\nThanks. That was probably more amusing to me in particular than to most\npgsql hackers, as I think I've been a victim of your trigger.\n\n\n...\n>\n> Obviously this is a very expensive trigger, but one that we can drop\n> in a very specific circumstance. But we NEVER want to drop it for\n> everyone. It seems like a very reasonable use-case to me.\n\nAnd since the query is absolutely expensive, not just expensive\nrelative to a no-op, then Merlin's suggestion seems entirely suitable\nfor your use-case.\n\nCheers,\n\nJeff\n",
"msg_date": "Tue, 7 Aug 2012 15:22:55 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is drop/restore trigger transactional?"
},
{
"msg_contents": "On Tue, Aug 7, 2012 at 3:22 PM, Jeff Janes <[email protected]> wrote:\n> On Tue, Aug 7, 2012 at 2:39 PM, Craig James <[email protected]> wrote:\n>> On Tue, Aug 7, 2012 at 1:45 PM, Jeff Janes <[email protected]> wrote:\n>>> On Tue, Aug 7, 2012 at 1:15 PM, Merlin Moncure <[email protected]> wrote:\n>>>>\n>>>> IF current_user = 'bulk_writer' THEN\n>>>> return new;\n>>>> END IF;\n>>>> <expensive stuff>\n>>>\n>>> I don't know Craig's case, but often the most expensive of the\n>>> \"expensive stuff\" is the bare fact of firing a trigger in the first\n>>> place.\n>>\n>> My use case is pretty simple: Copy some already-validated user data\n>> from one schema to another. Since the trigger has already been\n>> applied, we're guaranteed that the data is already in the form we\n>> want.\n>>\n>> For your amusement:\n>\n> Thanks. That was probably more amusing to me in particular than to most\n> pgsql hackers, as I think I've been a victim of your trigger.\n>\n>\n> ...\n>>\n>> Obviously this is a very expensive trigger, but one that we can drop\n>> in a very specific circumstance. But we NEVER want to drop it for\n>> everyone. It seems like a very reasonable use-case to me.\n>\n> And since the query is absolutely expensive, not just expensive\n> relative to a no-op, then Merlin's suggestion seems entirely suitable\n> for your use-case.\n\nThanks for the ideas. I think I have something to work with.\n\nCraig James\n",
"msg_date": "Tue, 7 Aug 2012 15:46:53 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is drop/restore trigger transactional?"
},
{
"msg_contents": "On 08/08/2012 04:15 AM, Merlin Moncure wrote:\n> IF current_user = 'bulk_writer' THEN\n> return new;\n> END IF;\n> <expensive stuff>\n... or re-create the trigger with a `WHEN` clause (only available in \nnewer Pg versions, see CREATE TRIGGER) that excludes the migrated \ncustomer ID. You'd have to do it in a new tx to avoid locking the table \nfor ages though.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 08 Aug 2012 08:29:02 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is drop/restore trigger transactional?"
},
{
"msg_contents": "On Tue, Aug 7, 2012 at 5:29 PM, Craig Ringer <[email protected]> wrote:\n> On 08/08/2012 04:15 AM, Merlin Moncure wrote:\n>>\n>> IF current_user = 'bulk_writer' THEN\n>> return new;\n>> END IF;\n>> <expensive stuff>\n>\n> ... or re-create the trigger with a `WHEN` clause (only available in newer\n> Pg versions, see CREATE TRIGGER) that excludes the migrated customer ID.\n> You'd have to do it in a new tx to avoid locking the table for ages though.\n\nyeah --- and, locking aside, I'd advise you not to do that anyways:\ntry and keep one block of code that enforces all the rules properly.\nalso, good deployment practices (especially in cases of security\nsensitive environments) should have good firewalls between production\nservices and developer introduced code.\n\nmerlin\n",
"msg_date": "Tue, 7 Aug 2012 17:42:17 -0700",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is drop/restore trigger transactional?"
},
{
"msg_contents": "Creating an inherit table without a trigger would be a good idea? Like a\nkind of partitioning, but simpler.\n\nCheers,\nMatheus de Oliveira\n\nCreating an inherit table without a trigger would be a good idea? Like a kind of partitioning, but simpler.\nCheers,\nMatheus de Oliveira",
"msg_date": "Sat, 11 Aug 2012 11:16:02 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is drop/restore trigger transactional?"
}
] |
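A minimal sketch of the role-based bypass Merlin describes, using an invented trigger function and table name (check_compound, orders); only the current_user test comes from the thread, the rest is illustrative:

    CREATE OR REPLACE FUNCTION check_compound() RETURNS trigger AS $$
    BEGIN
        -- bulk_writer is a locked-down role used only by the trusted copy job
        IF current_user = 'bulk_writer' THEN
            RETURN NEW;
        END IF;

        -- <expensive validation queries go here>

        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_compound_check
        BEFORE INSERT OR UPDATE ON orders
        FOR EACH ROW EXECUTE PROCEDURE check_compound();

Craig Ringer's alternative re-creates the trigger with a row-level WHEN clause instead, e.g. WHEN (NEW.customer_id <> 42), which skips the function call entirely for the excluded rows but requires a (brief) trigger re-creation in its own transaction.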
[
{
"msg_contents": "On 08/03/2012 05:14 PM, [email protected] wrote:\n\n > It is read-only table so every integer column have an index.\n\nFirst tip: Define the table without the indexes. INSERT your data, and \nonly after it is inserted create your indexes.\n\nSimilarly, if you're making huge changes to the table you should \nconsider dropping the indexes, making the changes, and re-creating the \nindexes. You might not have to drop the indexes if you aren't changing \nindexed fields, since HOT might save you, but it depends a lot on the \nspecifics of the table's on-disk layout etc.\n\n> The server is: 12 GB RAM, 1,5 TB SATA, 4 CORES. All server for postgres.\n> There are many more tables in this database so RAM do not cover all \n> database.\n\nOK, in that case more info on the disk subsystem is generally helpful. \nDisk spin speed, type? RAID configuration if any? eg:\n\n 4 x 750GB 7200RPM Western Digital Black SATA 3 HDDs in RAID 10 using \nthe Linux 'md' raid driver\n\nor\n\n 2 x 1.5TB 7200RPM \"Enterprise/near-line\" SATA3 HDDs in RAID 1 using a \nDell PARC xxxx controller with BBU in write-back cache mode.\n\n... though if you're only bulk-inserting the BBU doesn't matter much.\n> |\n> | I wonder what option would be better in performance point of view.\n\nI would advise you to test on a subset of your data. Try loading the \nsame 50,000 records into different databases, one with each structure. \nMeasure how long the load takes for each design, and how long the \nqueries you need to run take to execute. Repeat the process with 500,000 \nrecords and see if one design slows down more than the other design \ndoes. Etc.\n\n> I need to make a good decision because import of this data will take \n> me a 20 days.\n\nFor the sheer size of data you have you might want to think about using \npg_bulkload. If you can't or don't want to do that, then at least use \nCOPY to load big batches of your data.\n\n--\nCraig Ringer\n\n\n\n\n\n\nOn 08/03/2012 05:14 PM,\n [email protected] wrote:\n\n > It is read-only table so every integer column have an index.\n\n First tip: Define the table without the indexes. INSERT your data,\n and only after it is inserted create your indexes.\n\n Similarly, if you're making huge changes to the table you should\n consider dropping the indexes, making the changes, and re-creating\n the indexes. You might not have to drop the indexes if you aren't\n changing indexed fields, since HOT might save you, but it depends\n a lot on the specifics of the table's on-disk layout etc.\n\n\n\n\n The server is: 12 GB RAM, 1,5 TB SATA, 4 CORES. All server for\n postgres.\n There are many more tables in this database so RAM do not cover\n all database.\n\n\n OK, in that case more info on the disk subsystem is generally\n helpful. Disk spin speed, type? RAID configuration if any? eg:\n\n 4 x 750GB 7200RPM Western Digital Black SATA 3 HDDs in RAID 10\n using the Linux 'md' raid driver\n\n or\n\n 2 x 1.5TB 7200RPM \"Enterprise/near-line\" SATA3 HDDs in RAID 1\n using a Dell PARC xxxx controller with BBU in write-back cache mode.\n\n ... though if you're only bulk-inserting the BBU doesn't matter\n much.\n\n\n\n I wonder what option would be better in performance point of view.\n \n\n\n I would advise you to test on a subset of your data. Try loading the\n same 50,000 records into different databases, one with each\n structure. Measure how long the load takes for each design, and how\n long the queries you need to run take to execute. 
Repeat the process\n with 500,000 records and see if one design slows down more than the\n other design does. Etc.\n\n\n I need to make a good decision because import of this data will\n take me a 20 days.\n\n\n For the sheer size of data you have you might want to think about\n using pg_bulkload. If you can't or don't want to do that, then at\n least use COPY to load big batches of your data. \n\n --\n Craig Ringer",
"msg_date": "Wed, 08 Aug 2012 14:33:02 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql - performance of using array in big database"
}
] |
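A condensed sketch of the load pattern described above: drop the indexes, bulk load with COPY, then index once at the end. Table, column and file names are placeholders, not taken from the original schema:

    DROP INDEX IF EXISTS item_a_idx;
    DROP INDEX IF EXISTS item_b_idx;

    -- one COPY per batch file; COPY is far cheaper than row-by-row INSERTs
    COPY item (id_item, a, b) FROM '/path/to/batch_0001.csv' WITH (FORMAT csv);

    -- rebuild the indexes and refresh statistics after the last batch
    CREATE INDEX item_a_idx ON item (a);
    CREATE INDEX item_b_idx ON item (b);
    ANALYZE item;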
[
{
"msg_contents": "Hi,\n\nOn 3 August 2012 19:14, <[email protected]> wrote:\n> I want to add to table \"Item\" a column \"a_elements\" (array type of big\n> integers) Every record would have not more than 50-60 elements in this\n> column.\n> After that i would create index GIN on this column and typical select should\n> look like this:\n> select*from item where......and5<@ a_elements;\n\nI would use this.\n\n> I have also second, more classical, option.\n> Do not add column a_elements to table item but create table elements with\n> two columns:\n>\n> id_item\n> id_element\n>\n> This table would have around 200 mln records.\n> I am able to do partitioning on this tables so number of records would\n> reduce to 20 mln in table elements and 500 K in table item.\n\nI do not understand how you can 'reduce to 20 mln'. Do you mean per partition?\n\n> The second option select looks like this:\n> select item.*\n> from item\n> leftjoin elements on(item.id_item=elements.id_item)\n> where....\n> and5= elements.id_element\n> I wonder what option would be better in performance point of view. Is\n> postgres able to use many different indexes with index GIN (option 1) in a\n> single query ?\n\nAssuming that you partition your tables using id_item. Postgres is not\ngood with partitions if joins are used. Let's have a query:\nselect .. from item\nleft join elements on (item.id_item=elements.id_item)\nwhere id_item = 2\n\nneeds to scan all partitions in 'elements' table because planner is\nnot smart enough to push where condition to join clause i.e. rewrite\nquery like this (8.4, haven't checked in 9.x releases):\n\nselect .. from item\nleft join elements on (item.id_item=elements.id_item and elements.id_item = 2)\nwhere id_item = 2\n\nIn order to use partitioning effectively all you queries need to have\nconstant expression (id_item = 2) in where/join on columns which are\nused for partitioning\n\n-- \nOndrej Ivanic\n([email protected])\n",
"msg_date": "Thu, 9 Aug 2012 10:18:10 +1000",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql - performance of using array in big database"
}
] |
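To make the "constant expression" point concrete, a toy layout (all names invented) where constraint exclusion can prune elements partitions only because id_item appears as a literal in both the WHERE and the join clause:

    CREATE TABLE elements (id_item bigint, id_element bigint);

    CREATE TABLE elements_p0 (CHECK (id_item >= 0 AND id_item < 1000000))
        INHERITS (elements);
    CREATE TABLE elements_p1 (CHECK (id_item >= 1000000 AND id_item < 2000000))
        INHERITS (elements);

    SET constraint_exclusion = partition;

    -- prunable: the planner sees a constant on the partition key
    EXPLAIN
    SELECT i.*
    FROM item i
    LEFT JOIN elements e ON (i.id_item = e.id_item AND e.id_item = 2)
    WHERE i.id_item = 2 AND e.id_element = 5;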
[
{
"msg_contents": "http://i.imgur.com/sva4H.png\n\nIn the image above, please find the traffic we have seen from our main postgresql node to our cloud backup. \n\nA few notes on that image:\n\na) We're only interested in looking at the blue - outbound - traffic\nb) In general, this pipe is almost exclusively for WAL usage only. \n\nHopefully you can see how generalized WAL traffic increases, until it cuts off sharply, only to begin increasing again.\n\n\n\nhttp://i.imgur.com/2V8XY.png\n\nIn that image, you can see the traffic just after a cutoff - slowly ramping up again. You can also see our mysterious sawtooth pattern - spikes of WAL traffic that occur on the hour, quarter-hour, half-hour, and three-quarter-hour marks. We don't see any corresponding spikes in database activity at those times, so we're also mystified as to why these spikes are occurring. \n\n\nAny ideas on any of this? Why the sawteeth? Why the rise-then-drop? \n\nThanks so much!\n\n",
"msg_date": "Fri, 10 Aug 2012 13:35:16 -0400 (EDT)",
"msg_from": "Joseph Marlin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Increasing WAL usage followed by sudden drop"
},
{
"msg_contents": "\n> Any ideas on any of this? Why the sawteeth? Why the rise-then-drop? \n\nWell, my first thought on the sawteeth is that you have archive_timeout\nset to 15 minutes. No?\n\nAs for the gradual buildup over days, that most likely corresponds to\neither changes in application activity levels, or some kind of weekly\ndata purge cycle which shrinks your database every weekend. Since I\ndon't know anything about your admin or your application, that's a best\nguess.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n",
"msg_date": "Mon, 13 Aug 2012 18:09:23 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increasing WAL usage followed by sudden drop"
},
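Josh's archive_timeout guess is easy to check from SQL before digging any deeper; a 900 s setting would force a WAL segment switch every 15 minutes, which would match the quarter-hour sawtooth:

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('archive_timeout', 'checkpoint_timeout', 'checkpoint_segments');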
{
"msg_contents": "We are not doing anything to postgres that would cause the rise and drop. \nData base activity is pretty consistent. nor are we doing any kind of\npurge. This week the drop occurred after 6 days. We are thinking it must\nbe some kind of internal postgres activity but we can't track it down.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Increasing-WAL-usage-followed-by-sudden-drop-tp5719567p5720136.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Thu, 16 Aug 2012 09:31:51 -0700 (PDT)",
"msg_from": "delongboy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increasing WAL usage followed by sudden drop"
},
{
"msg_contents": "delongboy <[email protected]> wrote:\n> We are not doing anything to postgres that would cause the rise\n> and drop. Data base activity is pretty consistent. nor are we\n> doing any kind of purge. This week the drop occurred after 6\n> days. We are thinking it must be some kind of internal postgres\n> activity but we can't track it down.\n \nMaybe autovacuum freezing tuples (which is a WAL-logged operation)\nas tables periodically hit the autovacuum_freeze_max_age limit?\n \n-Kevin\n\n",
"msg_date": "Thu, 16 Aug 2012 11:39:40 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increasing WAL usage followed by sudden drop"
},
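One way to test Kevin's freeze hypothesis is to watch table XID ages over time: if anti-wraparound freezing is behind the surge, the oldest ages should climb toward autovacuum_freeze_max_age and then drop sharply right after a WAL spike. A sketch using only the standard catalogs:

    SHOW autovacuum_freeze_max_age;

    SELECT relname, age(relfrozenxid) AS xid_age
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY age(relfrozenxid) DESC
    LIMIT 20;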
{
"msg_contents": "\n> We are not doing anything to postgres that would cause the rise and\n> drop.\n> Data base activity is pretty consistent. nor are we doing any kind\n> of\n> purge. This week the drop occurred after 6 days. We are thinking it\n> must\n> be some kind of internal postgres activity but we can't track it\n> down.\n\nWell, we certainly can't figure it out on this list by blind guessing ...\n\n",
"msg_date": "Fri, 17 Aug 2012 10:01:01 -0500 (CDT)",
"msg_from": "Joshua Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increasing WAL usage followed by sudden drop"
},
{
"msg_contents": "\nJosh Berkus wrote\n> \n>> We are not doing anything to postgres that would cause the rise and\n>> drop.\n>> Data base activity is pretty consistent. nor are we doing any kind\n>> of\n>> purge. This week the drop occurred after 6 days. We are thinking it\n>> must\n>> be some kind of internal postgres activity but we can't track it\n>> down.\n> \n> Well, we certainly can't figure it out on this list by blind guessing ...\n> \n\nHave to agree with you there. Unfortunately this is where we've ended up.\nWhat can we look at and/or show that would help us to narrow it down? Is\nthere anyway we can look into the wal file and see exactly what is being\nsent? I think this might actually give us the most insight.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Increasing-WAL-usage-followed-by-sudden-drop-tp5719567p5720250.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Fri, 17 Aug 2012 10:53:39 -0700 (PDT)",
"msg_from": "delongboy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increasing WAL usage followed by sudden drop"
},
{
"msg_contents": "On Fri, Aug 17, 2012 at 10:53 AM, delongboy <[email protected]> wrote:\n>\n> Josh Berkus wrote\n>>\n>>> We are not doing anything to postgres that would cause the rise and\n>>> drop.\n>>> Data base activity is pretty consistent. nor are we doing any kind\n>>> of\n>>> purge. This week the drop occurred after 6 days. We are thinking it\n>>> must\n>>> be some kind of internal postgres activity but we can't track it\n>>> down.\n>>\n>> Well, we certainly can't figure it out on this list by blind guessing ...\n>>\n>\n> Have to agree with you there. Unfortunately this is where we've ended up.\n> What can we look at and/or show that would help us to narrow it down? Is\n> there anyway we can look into the wal file and see exactly what is being\n> sent? I think this might actually give us the most insight.\n\nMaybe there is an easier way, but one thing would be to compile a test\nserver (of the same version as the production) with WAL_DEBUG defined\nin src/include/pg_config_manual.h, turn on the wal_debug guc, and\ncrank up trace_recovery_messages. Then replay the WAL log files from\nproduction through this test server and see what it logs. That\nrequires that you have\n\nEasier would to be turn on wal_debug and watch the server log as the\nWAL logs are generated, instead of recovered, but you probably don't\nwant to do that on production. So you would need a workload generator\nthat also exhibits the phenomenon of interest.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Fri, 17 Aug 2012 11:42:27 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increasing WAL usage followed by sudden drop"
},
{
"msg_contents": "\nJeff Janes wrote\n> \n> Maybe there is an easier way, but one thing would be to compile a test\n> server (of the same version as the production) with WAL_DEBUG defined\n> in src/include/pg_config_manual.h, turn on the wal_debug guc, and\n> crank up trace_recovery_messages. Then replay the WAL log files from\n> production through this test server and see what it logs. That\n> requires that you have\n> \n> Easier would to be turn on wal_debug and watch the server log as the\n> WAL logs are generated, instead of recovered, but you probably don't\n> want to do that on production. So you would need a workload generator\n> that also exhibits the phenomenon of interest.\n> \n\nThis sounds like it may help me see what is going on. However I am not\nfinding very much documentation as to how to do this exactly. What I have\nis it seems this has to be set and postgres needs to be re-compiled to\nenable it. Is this true? As that would not really be a viable option right\nnow. I am in position to set up a test server and run wal files through it. \nBut I am not sure how to accomplish this exactly? Is there somewhere you\nanyone could point me to find documentation on how to do this?\n\nThanks a lot for everyone's input so far.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Increasing-WAL-usage-followed-by-sudden-drop-tp5719567p5720492.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Mon, 20 Aug 2012 13:51:09 -0700 (PDT)",
"msg_from": "delongboy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increasing WAL usage followed by sudden drop"
},
{
"msg_contents": "On Mon, Aug 20, 2012 at 1:51 PM, delongboy <[email protected]> wrote:\n>\n> Jeff Janes wrote\n>>\n>> Maybe there is an easier way, but one thing would be to compile a test\n>> server (of the same version as the production) with WAL_DEBUG defined\n>> in src/include/pg_config_manual.h, turn on the wal_debug guc, and\n>> crank up trace_recovery_messages. Then replay the WAL log files from\n>> production through this test server and see what it logs. That\n>> requires that you have\n>>\n\nSorry, I got distracted during editing and didn't finish my sentence.\nIt requires you have a backup to apply the WAL to, and that you have\nthe entire history of WAL from the when the backup was started, until\nthe time when the interesting things are happening. That is rather\nannoying.\n\nIt seems like it shouldn't be all that hard to write a tool to parse\nWAL logs in a context-free basis (i.e. without the backup to start\napplying them to) and emit some kind of descriptions of the records\nand their sizes. But I don't know about such a tool already existing,\nand am not able to offer to create one. (And assuming one existed,\nkeeping it in sync with the main code would be a continuing problem)\n\n\n>\n> This sounds like it may help me see what is going on. However I am not\n> finding very much documentation as to how to do this exactly. What I have\n> is it seems this has to be set and postgres needs to be re-compiled to\n> enable it. Is this true?\n\nYes. The compilation only needs to happen on the test server,\nhowever, not the production server.\n\n> As that would not really be a viable option right\n> now. I am in position to set up a test server and run wal files through it.\n> But I am not sure how to accomplish this exactly? Is there somewhere you\n> anyone could point me to find documentation on how to do this?\n\ncreating the backup, accumulating the logs, and replaying them are described in:\n\nhttp://www.postgresql.org/docs/9.1/static/continuous-archiving.html\n\nOf course it does not explicitly describe the case of replaying\nthrough a toy system rather than another production system. It\nassumes you are replaying through a soon-to-become production server.\n\nI'm not sure how to address that part, other than to have you ask\nspecific questions.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Wed, 22 Aug 2012 17:28:45 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increasing WAL usage followed by sudden drop"
},
{
"msg_contents": "\nJeff Janes wrote\n> \n> It seems like it shouldn't be all that hard to write a tool to parse\n> WAL logs in a context-free basis (i.e. without the backup to start\n> applying them to) and emit some kind of descriptions of the records\n> and their sizes. But I don't know about such a tool already existing,\n> and am not able to offer to create one. (And assuming one existed,\n> keeping it in sync with the main code would be a continuing problem)\n> \n\nI appreciate your help Jeff. I have come across what would seem such a\ntool.\nIts called xlogdump\nI am working on getting it installed, having issues with libs I think at the\nmoment. I will let you know how it works out.\n\nThank you everybody for your input!\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Increasing-WAL-usage-followed-by-sudden-drop-tp5719567p5720953.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n",
"msg_date": "Thu, 23 Aug 2012 07:43:18 -0700 (PDT)",
"msg_from": "delongboy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increasing WAL usage followed by sudden drop"
}
] |
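Not mentioned in the thread, but a lighter-weight alternative to replaying WAL through a debug build is to sample the server's WAL insert position on a schedule and difference successive samples; that shows when the extra WAL is written, even if not what it contains. pg_current_xlog_location() is available in 8.4/9.x, while the convenient pg_xlog_location_diff() helper only appears in 9.2, so on older releases the hex positions have to be diffed by hand:

    -- run from cron every minute and append the result to a log table or file
    SELECT now() AS sampled_at, pg_current_xlog_location() AS wal_position;

    -- 9.2+: bytes of WAL since an arbitrary reference point; subtracting
    -- successive samples gives the generation rate
    SELECT now() AS sampled_at,
           pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS wal_bytes;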
[
{
"msg_contents": "We have PostgreSQL 9.1 running on Centos 5 on two SSDs, one for indices and\none for data. The database is extremely active with reads and writes. We\nhave autovacuum enabled, but we didn't tweak it's aggressiveness. The\nproblem is that after some time the database grows even more than 100% on\nthe file system and most of the growth is because the indices are a few\ntimes bigger than they should be, and when this happens, the performance of\nthe DB drops.\n\nFor example, yesterday when I checked the database size on the production\nserver it was 30GB, and the restored dump of that database was only 17GB.\nThe most interesting thing is that the data wasn't bloated that much, but\nthe indices were. Some of them were a few times bigger than they should be.\nFor example an index on the production db is 440MB, while that same index\nafter dump/restore is 17MB, and there are many indices with that high\ndifference. We could fix the problem if we reindex the DB, but that makes\nour DB go offline and it's not possible to do in the production enviroment.\n\nIs there a way to make the autovacuum daemon more aggressive, since I'm not\nexactly sure how to do that in this case? Would that even help? Is there\nanother way to remove this index bloat?\n\nThanks in advance,\nStrahinja\n\nWe have PostgreSQL 9.1 running on Centos 5 on two SSDs, one for indices and one for data. The database is extremely active with reads and writes. We have autovacuum enabled, but we didn't tweak it's aggressiveness. The problem is that after some time the database grows even more than 100% on the file system and most of the growth is because the indices are a few times bigger than they should be, and when this happens, the performance of the DB drops.\nFor example, yesterday when I checked the database size on the production server it was 30GB, and the restored dump of that database was only 17GB. The most interesting thing is that the data wasn't bloated that much, but the indices were. Some of them were a few times bigger than they should be. For example an index on the production db is 440MB, while that same index after dump/restore is 17MB, and there are many indices with that high difference. We could fix the problem if we reindex the DB, but that makes our DB go offline and it's not possible to do in the production enviroment.\nIs there a way to make the autovacuum daemon more aggressive, since I'm not exactly sure how to do that in this case? Would that even help? Is there another way to remove this index bloat?\nThanks in advance,Strahinja",
"msg_date": "Sat, 11 Aug 2012 00:15:11 +0200",
"msg_from": "=?ISO-8859-2?Q?Strahinja_Kustudi=E6?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index Bloat Problem"
},
{
"msg_contents": "On Sat, Aug 11, 2012 at 12:15:11AM +0200, Strahinja Kustudić wrote:\n> Is there a way to make the autovacuum daemon more aggressive, since I'm not\n> exactly sure how to do that in this case? Would that even help? Is there\n> another way to remove this index bloat?\n\nhttp://www.depesz.com/index.php/2011/07/06/bloat-happens/\n\nBest regards,\n\ndepesz\n\n-- \nThe best thing about modern society is how easy it is to avoid contact with it.\n http://depesz.com/\n\n",
"msg_date": "Sat, 11 Aug 2012 11:30:42 +0200",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index Bloat Problem"
},
{
"msg_contents": "On 11/08/12 10:15, Strahinja Kustudić wrote:\n> We have PostgreSQL 9.1 running on Centos 5 on two SSDs, one for indices and\n> one for data. The database is extremely active with reads and writes. We\n> have autovacuum enabled, but we didn't tweak it's aggressiveness. The\n> problem is that after some time the database grows even more than 100% on\n> the file system and most of the growth is because the indices are a few\n> times bigger than they should be, and when this happens, the performance of\n> the DB drops.\n>\n> For example, yesterday when I checked the database size on the production\n> server it was 30GB, and the restored dump of that database was only 17GB.\n> The most interesting thing is that the data wasn't bloated that much, but\n> the indices were. Some of them were a few times bigger than they should be.\n> For example an index on the production db is 440MB, while that same index\n> after dump/restore is 17MB, and there are many indices with that high\n> difference. We could fix the problem if we reindex the DB, but that makes\n> our DB go offline and it's not possible to do in the production enviroment.\n>\n> Is there a way to make the autovacuum daemon more aggressive, since I'm not\n> exactly sure how to do that in this case? Would that even help? Is there\n> another way to remove this index bloat?\n>\n>\n\nSome workloads can be difficult to tame. However I would try something \nlike this in postgresql.conf:\n\nautovacuum_naptime= 10s\nautovacuum_vacuum_scale_factor = 0.1\n\nand maybe set log_autovacuum_min_duration so you see what autovacuum is \ndoing.\n\nIf the above settings don't help, then you could maybe monitor growth \nand schedule regular REINDEXes on the tables concerned (at some suitably \nquiet time).\n\nRegards\n\nMark\n\n\n\n\n\n",
"msg_date": "Tue, 14 Aug 2012 10:52:35 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index Bloat Problem"
},
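For the "monitor growth" part, a simple starting point is to record raw index sizes periodically and watch how they drift relative to their tables; no bloat estimation, just the standard statistics views:

    SELECT schemaname,
           relname      AS table_name,
           indexrelname AS index_name,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
           idx_scan
    FROM pg_stat_user_indexes
    ORDER BY pg_relation_size(indexrelid) DESC
    LIMIT 20;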
{
"msg_contents": "On Fri, Aug 10, 2012 at 3:15 PM, Strahinja Kustudić\n<[email protected]> wrote:\n>\n> For example, yesterday when I checked the database size on the production\n> server it was 30GB, and the restored dump of that database was only 17GB.\n> The most interesting thing is that the data wasn't bloated that much, but\n> the indices were. Some of them were a few times bigger than they should be.\n> For example an index on the production db is 440MB, while that same index\n> after dump/restore is 17MB, and there are many indices with that high\n> difference.\n\nCould your pattern of deletions be leaving sparsely populated, but not\ncompletely empty, index pages; which your insertions will then never\nreuse because they never again insert values in that key range?\n\nCheers,\n\nJeff\n\n",
"msg_date": "Mon, 13 Aug 2012 21:14:42 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index Bloat Problem"
},
{
"msg_contents": "Thanks for the help everyone and sorry for not replying sooner, I was on\na business trip.\n\n@Hubert pg_reorg looks really interesting and from the first read it looks\nto be a very good solution for maintenance, but for now I would rather try\nto slow down, or remove this bloat, so I have to do as less maintenance as\npossible.\n\n@Mark So basically I should decrease the autovacuum nap time from 60s to\n10s, reduce the scale factor from 0.2 to 0.1. log_autovacuum_min_duration is\nalready set to 0, which means everything is logged.\n\n@Jeff I'm not sure if I understand what you mean? I know that we never\nreuse key ranges. Could you be more clear, or give an example please.\n\nThanks in advance,\nStrahinja\n\n\n\nOn Tue, Aug 14, 2012 at 6:14 AM, Jeff Janes <[email protected]> wrote:\n\n> On Fri, Aug 10, 2012 at 3:15 PM, Strahinja Kustudić\n> <[email protected]> wrote:\n> >\n> > For example, yesterday when I checked the database size on the production\n> > server it was 30GB, and the restored dump of that database was only 17GB.\n> > The most interesting thing is that the data wasn't bloated that much, but\n> > the indices were. Some of them were a few times bigger than they should\n> be.\n> > For example an index on the production db is 440MB, while that same index\n> > after dump/restore is 17MB, and there are many indices with that high\n> > difference.\n>\n> Could your pattern of deletions be leaving sparsely populated, but not\n> completely empty, index pages; which your insertions will then never\n> reuse because they never again insert values in that key range?\n>\n> Cheers,\n>\n> Jeff\n>\n\nThanks for the help everyone and sorry for not replying sooner, I was on a business trip.@Hubert pg_reorg looks really interesting and from the first read it looks to be a very good solution for maintenance, but for now I would rather try to slow down, or remove this bloat, so I have to do as less maintenance as possible.\n@Mark So basically I should decrease the autovacuum nap time from 60s to 10s, reduce the scale factor from 0.2 to 0.1. log_autovacuum_min_duration is already set to 0, which means everything is logged.\n@Jeff I'm not sure if I understand what you mean? I know that we never reuse key ranges. Could you be more clear, or give an example please.\nThanks in advance,\nStrahinja\n\nOn Tue, Aug 14, 2012 at 6:14 AM, Jeff Janes <[email protected]> wrote:\nOn Fri, Aug 10, 2012 at 3:15 PM, Strahinja Kustudić\n<[email protected]> wrote:\n>\n> For example, yesterday when I checked the database size on the production\n> server it was 30GB, and the restored dump of that database was only 17GB.\n> The most interesting thing is that the data wasn't bloated that much, but\n> the indices were. Some of them were a few times bigger than they should be.\n> For example an index on the production db is 440MB, while that same index\n> after dump/restore is 17MB, and there are many indices with that high\n> difference.\n\nCould your pattern of deletions be leaving sparsely populated, but not\ncompletely empty, index pages; which your insertions will then never\nreuse because they never again insert values in that key range?\n\nCheers,\n\nJeff",
"msg_date": "Thu, 16 Aug 2012 21:57:41 +0200",
"msg_from": "=?ISO-8859-2?Q?Strahinja_Kustudi=E6?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index Bloat Problem"
},
{
"msg_contents": "On Thu, Aug 16, 2012 at 12:57 PM, Strahinja Kustudić\n<[email protected]> wrote:\n>\n> @Jeff I'm not sure if I understand what you mean? I know that we never reuse\n> key ranges. Could you be more clear, or give an example please.\n\nIf an index leaf page is completely empty because every entry on it\nwere deleted, it will get recycled to be used in some other part of\nthe index. (Eventually--it can take a while, especially if you have\nlong-running transactions).\n\nBut if the leaf page is only mostly empty, because only most of\nentries on it were deleted, than it can never be reused, except for\nentries that naturally fall into its existing key range (which will\nnever happen, if you never reuse key ranges)\n\nSo if you have a million records with keys 1..1000000, and do a\n\"delete from foo where key between 1 and 990000\", then 99% of those\nold index pages will become completely empty and eligible for reuse.\nBut if you do \"delete from foo where key%100>0\", then all of the pages\nwill become 99% empty, and none will be eligible for reuse (except the\nvery last one, which can still accept 1000001 and so on)\n\nThere has been talk of allowing logically adjacent, mostly empty\npages to be merged so that one of them becomes empty, but the way\nconcurrent access to btree indexes was designed this is extremely hard\nto do safely.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Fri, 17 Aug 2012 19:33:38 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index Bloat Problem"
},
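
A minimal psql sketch of the reuse behaviour described above, using a throwaway table (the table and index names here are made up for illustration only):

    CREATE TABLE foo (key integer PRIMARY KEY);
    INSERT INTO foo SELECT g FROM generate_series(1, 1000000) g;
    SELECT pg_size_pretty(pg_relation_size('foo_pkey'));   -- baseline index size

    -- Pattern 1: delete a contiguous key range, leaving wholly empty leaf
    -- pages that VACUUM can hand back for reuse by any future inserts.
    DELETE FROM foo WHERE key BETWEEN 1 AND 990000;
    -- Pattern 2 (run instead of the above to compare): leaves every page
    -- about 99% empty but never completely empty, so nothing is recycled.
    -- DELETE FROM foo WHERE key % 100 > 0;
    VACUUM foo;

    -- Insert a brand-new key range and check how much the index grew.
    INSERT INTO foo SELECT g FROM generate_series(1000001, 1990000) g;
    SELECT pg_size_pretty(pg_relation_size('foo_pkey'));

Under pattern 1 the second batch of inserts can refill the recycled pages, so the index stays close to its original size; under pattern 2 new pages have to be allocated and the index keeps growing even though it is mostly empty.
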
{
"msg_contents": "Thanks for this description--we have index bloat problems on a massively active (but small) database.This may help shed light on our problems.\n\nSorry for top-posting--challenged email reader.\n\nGreg W.\n\n\n\n>________________________________\n> From: Jeff Janes <[email protected]>\n>To: Strahinja Kustudić <[email protected]> \n>Cc: [email protected] \n>Sent: Friday, August 17, 2012 7:33 PM\n>Subject: Re: [PERFORM] Index Bloat Problem\n> \n>On Thu, Aug 16, 2012 at 12:57 PM, Strahinja Kustudić\n><[email protected]> wrote:\n>>\n>> @Jeff I'm not sure if I understand what you mean? I know that we never reuse\n>> key ranges. Could you be more clear, or give an example please.\n>\n>If an index leaf page is completely empty because every entry on it\n>were deleted, it will get recycled to be used in some other part of\n>the index. (Eventually--it can take a while, especially if you have\n>long-running transactions).\n>\n>But if the leaf page is only mostly empty, because only most of\n>entries on it were deleted, than it can never be reused, except for\n>entries that naturally fall into its existing key range (which will\n>never happen, if you never reuse key ranges)\n>\n>So if you have a million records with keys 1..1000000, and do a\n>\"delete from foo where key between 1 and 990000\", then 99% of those\n>old index pages will become completely empty and eligible for reuse.\n>But if you do \"delete from foo where key%100>0\", then all of the pages\n>will become 99% empty, and none will be eligible for reuse (except the\n>very last one, which can still accept 1000001 and so on)\n>\n>There has been talk of allowing logically adjacent, mostly empty\n>pages to be merged so that one of them becomes empty, but the way\n>concurrent access to btree indexes was designed this is extremely hard\n>to do safely.\n>\n>Cheers,\n>\n>Jeff\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\nThanks for this description--we have index bloat problems on a massively active (but small) database.This may help shed light on our problems.Sorry for top-posting--challenged email reader.Greg W. From: Jeff Janes <[email protected]> To: Strahinja Kustudić <[email protected]> Cc: [email protected] Sent: Friday, August 17, 2012 7:33 PM Subject: Re: [PERFORM] Index Bloat Problem \nOn Thu, Aug 16, 2012 at 12:57 PM, Strahinja Kustudić<[email protected]> wrote:>> @Jeff I'm not sure if I understand what you mean? I know that we never reuse> key ranges. Could you be more clear, or give an example please.If an index leaf page is completely empty because every entry on itwere deleted, it will get recycled to be used in some other part ofthe index. 
(Eventually--it can take a while, especially if you havelong-running transactions).But if the leaf page is only mostly empty, because only most ofentries on it were deleted, than it can never be reused, except forentries that naturally fall into its existing key range (which willnever happen, if you never reuse key ranges)So if you have a million records with keys 1..1000000, and do a\"delete from foo where key between\n 1 and 990000\", then 99% of thoseold index pages will become completely empty and eligible for reuse.But if you do \"delete from foo where key%100>0\", then all of the pageswill become 99% empty, and none will be eligible for reuse (except thevery last one, which can still accept 1000001 and so on)There has been talk of allowing logically adjacent, mostly emptypages to be merged so that one of them becomes empty, but the wayconcurrent access to btree indexes was designed this is extremely hardto do safely.Cheers,Jeff-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 18 Aug 2012 01:01:44 -0700 (PDT)",
"msg_from": "Greg Williamson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index Bloat Problem"
}
] |
[
{
"msg_contents": "Hi there!\n\nWe currently have a database table that's laid out something like this:\n id int\n date1 date\n belongs_to date\n type varchar(1)\n type_fk int\n start_time time\n end_time time\n location_fk int\n department_fk int\n value decimal\n\nWhere each row represents some data throughout the day (96 data points for\neach 15-minute period) - and each \"type_fk\", department, and location can\nhave up to say, 3 rows for a given start / end time and date (based on the\n\"type\").\n\nThis table has rapidly grown - we're adding about 1 - 2 million rows per\nmonth - and almost all of our queries actually sum up the values based on\nthe belongs_to date and the location_id, however, for other statistics we\nneed to keep the values separate. The db is now more than 60% of our\ndatabase, and we want to come up with a better way to store it. (To speed\nup other queries, we actually roll this table up into a daily table).\n\nWe're considering changing the structure of this table into one of the\nfollowing structures:\n\nOption [A]:\n id int\n date1 date\n belongs_to date\n type_fk int\n location_fk int\n department_fk int\n value_type1_0 decimal\n ....\n value_type1_96 decimal\n value_type2_0 decimal\n ....\n value_type2_96 decimal\n value_type3_0 decimal\n ....\n value_type3_96 decimal\n\nor, as an alternative:\n\nOption [B]:\n id int\n date1 date\n belongs_to date\n type varchar(1)\n type_fk int\n location_fk int\n department_fk int\n value_type_0 decimal\n ....\n value_type_96 decimal\n\nWe're having a hard time choosing between the two options. We'll\ndefinitely partition either one by the date or belongs_to column to speed\nup the queries.\n\nOption A would mean that any given date would only have a single row, with\nall three \"types\". However, this table would have 6+96*3 columns, and in\nmany cases at least 96 of those columns would be empty. More often than\nnot, however, at least half of the columns would be empty (most location's\naren't open all day).\n\nOption B would only create rows if the type had data in it, but the other 6\ncolumns would be redundant. Again, many of the columns might be empty.\n\n... From a space / size perspective, which option is a better choice?\n\nHow does PostgreSQL handle storing empty columns?\n\nThanks!\n\n--\nAnthony\n\nHi there!We currently have a database table that's laid out something like this: id int date1 date belongs_to date type varchar(1) type_fk int\n start_time time end_time time location_fk int department_fk int value decimalWhere each row represents some data throughout the day (96 data points for each 15-minute period) - and each \"type_fk\", department, and location can have up to say, 3 rows for a given start / end time and date (based on the \"type\").\nThis table has rapidly grown - we're adding about 1 - 2 million rows per month - and almost all of our queries actually sum up the values based on the belongs_to date and the location_id, however, for other statistics we need to keep the values separate. The db is now more than 60% of our database, and we want to come up with a better way to store it. (To speed up other queries, we actually roll this table up into a daily table).\nWe're considering changing the structure of this table into one of the following structures:Option [A]: id int date1 date belongs_to date\n type_fk int location_fk int department_fk int value_type1_0 decimal .... value_type1_96 decimal value_type2_0 decimal .... \n value_type2_96 decimal value_type3_0 decimal .... 
value_type3_96 decimalor, as an alternative:Option [B]:\n id int date1 date belongs_to date type varchar(1) type_fk int location_fk int department_fk int value_type_0 decimal .... \n value_type_96 decimalWe're having a hard time choosing between the two options. We'll definitely partition either one by the date or belongs_to column to speed up the queries.\nOption A would mean that any given date would only have a single row, with all three \"types\". However, this table would have 6+96*3 columns, and in many cases at least 96 of those columns would be empty. More often than not, however, at least half of the columns would be empty (most location's aren't open all day).\nOption B would only create rows if the type had data in it, but the other 6 columns would be redundant. Again, many of the columns might be empty.... From a space / size perspective, which option is a better choice?\nHow does PostgreSQL handle storing empty columns?Thanks!--Anthony",
"msg_date": "Sat, 11 Aug 2012 09:23:48 -0500",
"msg_from": "Anthony Presley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improve DB Size / Performance with Table Refactoring"
}
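
On the question of how empty columns are stored: a NULL column consumes no data bytes, only a bit in the row's null bitmap, so one rough way to compare candidate layouts is to measure row sizes directly. A sketch with hypothetical columns (it ignores the fixed tuple header and alignment padding, so treat the numbers as relative, not absolute):

    CREATE TABLE layout_test (
        id       integer,
        value_0  numeric,
        value_1  numeric,
        value_2  numeric
    );
    INSERT INTO layout_test VALUES (1, NULL, NULL, NULL), (2, 0, 0, 0);
    -- Compare a mostly-NULL row with one that stores explicit zero values.
    SELECT id, pg_column_size(layout_test.*) AS row_bytes FROM layout_test;
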
] |
[
{
"msg_contents": "Consider this EXPLAIN ANALYZE output:\n\n\thttp://explain.depesz.com/s/TCi\n\nNote the Bitmap Heap Scan at the bottom claims to be producing 7094 rows, and the Sort above it expects to be processing 7330 rows (the same number the Bitmap Heap Scan expected to produce)... but the sort is actually producing 4512231 rows, which the sort time would indicate is what really happened. How can this be?\n\n--\n-- Christophe Pettus\n [email protected]\n\n\n",
"msg_date": "Mon, 13 Aug 2012 18:15:26 -0700",
"msg_from": "Christophe Pettus <[email protected]>",
"msg_from_op": true,
"msg_subject": "7k records into Sort node, 4.5m out?"
},
{
"msg_contents": "On 14 Srpen 2012, 3:15, Christophe Pettus wrote:\n> Consider this EXPLAIN ANALYZE output:\n>\n> \thttp://explain.depesz.com/s/TCi\n>\n> Note the Bitmap Heap Scan at the bottom claims to be producing 7094 rows,\n> and the Sort above it expects to be processing 7330 rows (the same number\n> the Bitmap Heap Scan expected to produce)... but the sort is actually\n> producing 4512231 rows, which the sort time would indicate is what really\n> happened. How can this be?\n\nHi,\n\nnotice there's a merge join right above the sort. If there are duplicate\nvalues in the first table (charlie in the explain plans), the matching\nrows from the sort will be read repeatedly (re-scanned) and thus counted\nmultiple times.\n\nTomas\n\n\n",
"msg_date": "Tue, 14 Aug 2012 03:35:21 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7k records into Sort node, 4.5m out?"
},
{
"msg_contents": "\nOn Aug 13, 2012, at 6:35 PM, Tomas Vondra wrote:\n\n> On 14 Srpen 2012, 3:15, Christophe Pettus wrote:\n>> Consider this EXPLAIN ANALYZE output:\n>> \n>> \thttp://explain.depesz.com/s/TCi\n>> \n>> Note the Bitmap Heap Scan at the bottom claims to be producing 7094 rows,\n>> and the Sort above it expects to be processing 7330 rows (the same number\n>> the Bitmap Heap Scan expected to produce)... but the sort is actually\n>> producing 4512231 rows, which the sort time would indicate is what really\n>> happened. How can this be?\n> \n> Hi,\n> \n> notice there's a merge join right above the sort. If there are duplicate\n> values in the first table (charlie in the explain plans), the matching\n> rows from the sort will be read repeatedly (re-scanned) and thus counted\n> multiple times.\n\nThanks, that makes sense. Something a colleague of mine just noticed is that the estimate cost of the Index Scan node isn't being included in the cost of the Merge Join above it, which makes the Merge Join seem much cheaper than it really is. Could this be a planner bug?\n\n--\n-- Christophe Pettus\n [email protected]\n\n\n",
"msg_date": "Mon, 13 Aug 2012 18:48:48 -0700",
"msg_from": "Christophe Pettus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7k records into Sort node, 4.5m out?"
},
{
"msg_contents": "Christophe Pettus <[email protected]> writes:\n> Thanks, that makes sense. Something a colleague of mine just noticed is that the estimate cost of the Index Scan node isn't being included in the cost of the Merge Join above it, which makes the Merge Join seem much cheaper than it really is. Could this be a planner bug?\n\nNo, that looks sane. It's probably expecting that the range of keys on\nthe right-hand side is a lot less than the range of keys on the left,\nand thus the merge won't have to read all of the left side. Since the\noutput shows an estimated total number of rows in the LHS of 84 million,\nbut the join stopped after reading 20 million of them, it looks like\nthat effect did in fact occur. If the planner had that fraction dead\non, it would only have charged the mergejoin with a quarter of the\nindexscan's total estimated cost. It's hard to tell though exactly what\nit did think.\n\nThe whole thing looks a bit weird to me --- why did it not use a\nnestloop join with inner indexscan on charlie? With 7000 rows on the\nother side, the estimated cost for that shouldn't have been more than\nabout 30000 ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 13 Aug 2012 22:11:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7k records into Sort node, 4.5m out?"
},
{
"msg_contents": "\nOn Aug 13, 2012, at 7:11 PM, Tom Lane wrote:\n> The whole thing looks a bit weird to me --- why did it not use a\n> nestloop join with inner indexscan on charlie? With 7000 rows on the\n> other side, the estimated cost for that shouldn't have been more than\n> about 30000 ...\n\nHere's the same query with set enable_megejoin = off. All of the other query tuning parameters are default.\n\n\thttp://explain.depesz.com/s/dqO\n\n--\n-- Christophe Pettus\n [email protected]\n\n\n",
"msg_date": "Wed, 15 Aug 2012 12:13:19 -0700",
"msg_from": "Christophe Pettus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7k records into Sort node, 4.5m out?"
},
{
"msg_contents": "Christophe Pettus <[email protected]> writes:\n> On Aug 13, 2012, at 7:11 PM, Tom Lane wrote:\n>> The whole thing looks a bit weird to me --- why did it not use a\n>> nestloop join with inner indexscan on charlie? With 7000 rows on the\n>> other side, the estimated cost for that shouldn't have been more than\n>> about 30000 ...\n\n> Here's the same query with set enable_megejoin = off. All of the other query tuning parameters are default.\n\n> \thttp://explain.depesz.com/s/dqO\n\nMaybe you had better show us the actual query, and the table/index\ndefinitions. Because it's sure making odd choices here. This seems\nlike the wrong join order altogether ...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 15 Aug 2012 16:51:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7k records into Sort node, 4.5m out?"
},
{
"msg_contents": "On 8/15/12 1:51 PM, Tom Lane wrote:\n> Christophe Pettus <[email protected]> writes:\n>> On Aug 13, 2012, at 7:11 PM, Tom Lane wrote:\n>>> The whole thing looks a bit weird to me --- why did it not use a\n>>> nestloop join with inner indexscan on charlie? With 7000 rows on the\n>>> other side, the estimated cost for that shouldn't have been more than\n>>> about 30000 ...\n> \n>> Here's the same query with set enable_megejoin = off. All of the other query tuning parameters are default.\n> \n>> \thttp://explain.depesz.com/s/dqO\n> \n> Maybe you had better show us the actual query, and the table/index\n> definitions. Because it's sure making odd choices here. This seems\n> like the wrong join order altogether ...\n\nWe'll need to do that off-list for confidentiality reasons.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n",
"msg_date": "Wed, 15 Aug 2012 14:02:35 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7k records into Sort node, 4.5m out?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> On 8/15/12 1:51 PM, Tom Lane wrote:\n>> Maybe you had better show us the actual query, and the table/index\n>> definitions. Because it's sure making odd choices here. This seems\n>> like the wrong join order altogether ...\n\n> We'll need to do that off-list for confidentiality reasons.\n\nIf you can show us the anonymized query plan, why not the anonymized query?\nIt doesn't look like it could be all that complicated.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 15 Aug 2012 17:44:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7k records into Sort node, 4.5m out?"
},
{
"msg_contents": "\n> If you can show us the anonymized query plan, why not the anonymized query?\n> It doesn't look like it could be all that complicated.\n\nIt's not:\n\nSELECT COUNT(*)\nFROM \"user\"\nINNER JOIN \"house\"\n ON (\"user\".\"house_id\" = \"house\".\"id\")\nLEFT OUTER JOIN \"district\"\n ON (\"house\".\"district_id\" = \"district\".\"id\")\nWHERE (\"user\".\"status\" = 0\n AND (\"district\".\"update_status\" = 2\n OR \"district\".\"update_status\" = 3 )\n AND (\"user\".\"valid\" = 1\n OR \"user\".\"valid\" = 3 )\n AND \"district\".\"is_test\" = false );\n\nHowever, since the anonymization above doesn't quite match that used in\nthe EXPLAIN plan, I'm not sure what you'll get out of it. And yes, we\nknow that the outer join is being invalidated.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n",
"msg_date": "Wed, 15 Aug 2012 17:25:06 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7k records into Sort node, 4.5m out?"
},
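
For reference: because the WHERE conditions on "district" can never be true for a NULL-extended row, the LEFT OUTER JOIN behaves as an inner join here (presumably the "invalidated" outer join mentioned above). A sketch of the same query written in that equivalent form:

    SELECT COUNT(*)
    FROM "user"
    JOIN "house"    ON "user"."house_id" = "house"."id"
    JOIN "district" ON "house"."district_id" = "district"."id"
    WHERE "user"."status" = 0
      AND ("district"."update_status" = 2 OR "district"."update_status" = 3)
      AND ("user"."valid" = 1 OR "user"."valid" = 3)
      AND "district"."is_test" = false;

The planner normally performs this join-strength reduction on its own, so the rewrite documents the intent rather than changing the plan.
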
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> SELECT COUNT(*)\n> FROM \"user\"\n> INNER JOIN \"house\"\n> ON (\"user\".\"house_id\" = \"house\".\"id\")\n> LEFT OUTER JOIN \"district\"\n> ON (\"house\".\"district_id\" = \"district\".\"id\")\n> WHERE (\"user\".\"status\" = 0\n> AND (\"district\".\"update_status\" = 2\n> OR \"district\".\"update_status\" = 3 )\n> AND (\"user\".\"valid\" = 1\n> OR \"user\".\"valid\" = 3 )\n> AND \"district\".\"is_test\" = false );\n\n> However, since the anonymization above doesn't quite match that used in\n> the EXPLAIN plan, I'm not sure what you'll get out of it. And yes, we\n> know that the outer join is being invalidated.\n\nAh, I see where I was confused: in the original query plan I'd been\nimagining that charlie.sierra was a unique column, but your gloss on\nthat as being house.district_id implies that it's highly non-unique.\nAnd looking at the rowcounts in the original plan backs that up:\nthere are about 600 house rows per district row. So my thought of\nhaving district as the outer side of a nestloop scanning the index\non house.district_id would not really work very well --- maybe it\nwould end up cheaper than the mergejoin plan, but it's far from a\nclear-cut win.\n\nOn the whole I'm thinking the code is operating as designed here.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 16 Aug 2012 02:01:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7k records into Sort node, 4.5m out?"
},
{
"msg_contents": "\n> Ah, I see where I was confused: in the original query plan I'd been\n> imagining that charlie.sierra was a unique column, but your gloss on\n> that as being house.district_id implies that it's highly non-unique.\n> And looking at the rowcounts in the original plan backs that up:\n> there are about 600 house rows per district row. So my thought of\n> having district as the outer side of a nestloop scanning the index\n> on house.district_id would not really work very well --- maybe it\n> would end up cheaper than the mergejoin plan, but it's far from a\n> clear-cut win.\n> \n> On the whole I'm thinking the code is operating as designed here.\n\nWell, except for the part where it's choosing a plan which takes 486\nseconds over a plan which takes 4 seconds.\n\nI guess what I'm really not understanding is why it's calculating a cost\nof 3.7m for the index scan, and then discarding that *entire* cost and\nnot including it in the total cost of the query? This seems wrong,\nespecially since that index scan, in fact, ends up being 85% of the\nexecution time of the query:\n\n Merge Join (cost=7457.670..991613.190 rows=1092168 width=4) (actual\ntime=57.854..481062.706 rows=4514968 loops=1)\n\n Merge Cond: (charlie.sierra = four.quebec_seven)\n\nIndex Scan using whiskey_delta on charlie (cost=0.000..3775171.860\nrows=84904088 width=8) (actual time=0.006..459501.341 rows=20759070 loops=1)\n\nIf the cost of the index scan were included in the total cost of the\nquery plan, then the planner *would* chose the nestloop plan instead.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n",
"msg_date": "Sun, 19 Aug 2012 12:53:37 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7k records into Sort node, 4.5m out?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> I guess what I'm really not understanding is why it's calculating a cost\n> of 3.7m for the index scan, and then discarding that *entire* cost and\n> not including it in the total cost of the query?\n\nIt isn't ... or at least, you've offered no evidence that it is.\nIt's discounting some fraction of the cost on the (apparently correct)\nbasis that the merge won't read that input all the way to the end.\nWhether it's discounted by an appropriate fraction is hard to tell\nfrom the information given. The actual rows count is about a quarter\nthe whole-scan estimate, so a multiplier of 0.25 seems right in\nhindsight, and that seems to match up roughly right with the mergejoin\ncost estimate --- but not knowing the actual table size, there's a lot\nof uncertainty here.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 20 Aug 2012 11:40:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7k records into Sort node, 4.5m out?"
},
{
"msg_contents": "\n> It isn't ... or at least, you've offered no evidence that it is.\n\nSorry, I thought Christophe had sent you the details offlist. Checking ...\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n",
"msg_date": "Mon, 20 Aug 2012 10:47:15 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7k records into Sort node, 4.5m out?"
}
] |
[
{
"msg_contents": "According to the docs on cluster:\nif you tend to access some data more than others, and there is an\nindex that groups them together, you will benefit from using CLUSTER\n\nhowever, this doesn't address the situation where you have a\nconditional index. For example, we have certain large tables that have\na column called 'is_deleted'. It's a boolean, to indicate whether the\nrecord is 'deleted' as far as the app is concerned. Since the app only\never shows data where is_deleted is false, I created an index:\ncreate index foo on bar where is_deleted is false;\nand now I'm wondering if clustering on this index will bring the\nbenefits noted above or if I should rebuild my index w/o the where\nclause to obtain the best 'improvement' from cluster.\n\nAnyone know?\n\n-- \nDouglas J Hunley ([email protected])\nTwitter: @hunleyd Web:\ndouglasjhunley.com\nG+: http://goo.gl/sajR3\n\n",
"msg_date": "Tue, 14 Aug 2012 11:27:18 -0400",
"msg_from": "Doug Hunley <[email protected]>",
"msg_from_op": true,
"msg_subject": "cluster on conditional index?"
},
{
"msg_contents": "On Tue, Aug 14, 2012 at 8:27 AM, Doug Hunley <[email protected]> wrote:\n> According to the docs on cluster:\n> if you tend to access some data more than others, and there is an\n> index that groups them together, you will benefit from using CLUSTER\n>\n> however, this doesn't address the situation where you have a\n> conditional index.\n\nIt seems like it is not allowed.\n\njjanes=# create index on pgbench_accounts (aid) where bid=33;\njjanes=# cluster pgbench_accounts USING pgbench_accounts_aid_idx ;\nERROR: cannot cluster on partial index \"pgbench_accounts_aid_idx\"\n\nBut I don't see a fundamental reason it can't be allowed, maybe\nimplementing that should be on the to-do list.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Tue, 14 Aug 2012 10:10:47 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cluster on conditional index?"
},
{
"msg_contents": "On Tue, Aug 14, 2012 at 10:10:47AM -0700, Jeff Janes wrote:\n> On Tue, Aug 14, 2012 at 8:27 AM, Doug Hunley <[email protected]> wrote:\n> > According to the docs on cluster:\n> > if you tend to access some data more than others, and there is an\n> > index that groups them together, you will benefit from using CLUSTER\n> >\n> > however, this doesn't address the situation where you have a\n> > conditional index.\n> \n> It seems like it is not allowed.\n> \n> jjanes=# create index on pgbench_accounts (aid) where bid=33;\n> jjanes=# cluster pgbench_accounts USING pgbench_accounts_aid_idx ;\n> ERROR: cannot cluster on partial index \"pgbench_accounts_aid_idx\"\n> \n> But I don't see a fundamental reason it can't be allowed, maybe\n> implementing that should be on the to-do list.\n> \n> Cheers,\n> \n> Jeff\n> \n\nIt probably has to do with the fact that a conditional index, does\nnot include every possible row in the table. Although, a \"cluster\" of\nthe matching rows and then leave the rest in place, should work. How\nis that for hand-waving. :)\n\nRegards,\nKen\n\n",
"msg_date": "Tue, 14 Aug 2012 12:29:10 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cluster on conditional index?"
},
{
"msg_contents": "On Tue, Aug 14, 2012 at 1:29 PM, [email protected] <[email protected]> wrote:\n>\n> It probably has to do with the fact that a conditional index, does\n> not include every possible row in the table. Although, a \"cluster\" of\n> the matching rows and then leave the rest in place, should work. How\n> is that for hand-waving. :)\n>\n\nThat actually makes sense to me. Cluster the rows covered by that\nindex, let the rest fall where they may. I'm typically only accessing\nthe rows covered by that index, so I'd get the benefit of the cluster\ncommand but wouldn't have to spend cycles doing the cluster for rows I\ndon't care about.\n\n\n-- \nDouglas J Hunley ([email protected])\nTwitter: @hunleyd Web:\ndouglasjhunley.com\nG+: http://goo.gl/sajR3\n\n",
"msg_date": "Wed, 15 Aug 2012 09:43:07 -0400",
"msg_from": "Doug Hunley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cluster on conditional index?"
},
{
"msg_contents": "\n> That actually makes sense to me. Cluster the rows covered by that\n> index, let the rest fall where they may. I'm typically only accessing\n> the rows covered by that index, so I'd get the benefit of the cluster\n> command but wouldn't have to spend cycles doing the cluster for rows I\n> don't care about.\n\nSure, that's a feature request though. And thinking about it, I'm\nwilling to bet that it's far harder to implement than it sounds.\n\nIn the meantime, you could ad-hoc this by splitting the table into two\npartitions and clustering one of the two partitions.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n",
"msg_date": "Wed, 15 Aug 2012 14:05:18 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cluster on conditional index?"
},
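
A rough sketch of the two-partition idea using 9.x inheritance partitioning, with hypothetical child-table and column names (routing of new and updated rows into the right child, via trigger or application logic, is omitted):

    CREATE TABLE bar_live    (CHECK (is_deleted = false)) INHERITS (bar);
    CREATE TABLE bar_deleted (CHECK (is_deleted = true))  INHERITS (bar);
    -- Only the live partition needs the index and the periodic CLUSTER.
    CREATE INDEX bar_live_lookup_idx ON bar_live (some_column);
    CLUSTER bar_live USING bar_live_lookup_idx;

With constraint_exclusion enabled, queries that filter on is_deleted = false can skip the deleted partition entirely.
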
{
"msg_contents": "On 08/15/12 14:05, Josh Berkus wrote:\n> \n>> That actually makes sense to me. Cluster the rows covered by that\n>> index, let the rest fall where they may. I'm typically only accessing\n>> the rows covered by that index, so I'd get the benefit of the cluster\n>> command but wouldn't have to spend cycles doing the cluster for rows I\n>> don't care about.\n> \n> Sure, that's a feature request though. And thinking about it, I'm\n> willing to bet that it's far harder to implement than it sounds.\n> \n> In the meantime, you could ad-hoc this by splitting the table into two\n> partitions and clustering one of the two partitions.\n\nWouldn't creating a second index on the boolean itself and then clustering\non that be much easier?\n\nBosco.\n\n",
"msg_date": "Wed, 15 Aug 2012 14:19:08 -0700",
"msg_from": "Bosco Rama <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cluster on conditional index?"
},
{
"msg_contents": "On Wed, Aug 15, 2012 at 5:19 PM, Bosco Rama <[email protected]> wrote:\n> On 08/15/12 14:05, Josh Berkus wrote:\n>>\n>>> That actually makes sense to me. Cluster the rows covered by that\n>>> index, let the rest fall where they may. I'm typically only accessing\n>>> the rows covered by that index, so I'd get the benefit of the cluster\n>>> command but wouldn't have to spend cycles doing the cluster for rows I\n>>> don't care about.\n>>\n>> Sure, that's a feature request though. And thinking about it, I'm\n>> willing to bet that it's far harder to implement than it sounds.\n\nHow/where does file feature requests?\n\n>>\n>> In the meantime, you could ad-hoc this by splitting the table into two\n>> partitions and clustering one of the two partitions.\n>\n> Wouldn't creating a second index on the boolean itself and then clustering\n> on that be much easier?\n\nthat's what I was looking into doing actuallly\n\n-- \nDouglas J Hunley ([email protected])\nTwitter: @hunleyd Web:\ndouglasjhunley.com\nG+: http://goo.gl/sajR3\n\n",
"msg_date": "Thu, 16 Aug 2012 11:25:48 -0400",
"msg_from": "Doug Hunley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cluster on conditional index?"
},
{
"msg_contents": "On Wed, Aug 15, 2012 at 6:43 AM, Doug Hunley <[email protected]> wrote:\n> On Tue, Aug 14, 2012 at 1:29 PM, [email protected] <[email protected]> wrote:\n>>\n>> It probably has to do with the fact that a conditional index, does\n>> not include every possible row in the table. Although, a \"cluster\" of\n>> the matching rows and then leave the rest in place, should work. How\n>> is that for hand-waving. :)\n>>\n>\n> That actually makes sense to me. Cluster the rows covered by that\n> index, let the rest fall where they may. I'm typically only accessing\n> the rows covered by that index, so I'd get the benefit of the cluster\n> command but wouldn't have to spend cycles doing the cluster for rows I\n> don't care about.\n\nIIRC, there isn't currently an in-place version of CLUSTER, it always\nrewrites the entire table. So it would still have to do something\nwith those rows, so that they show up in the new table. But it could\njust treat them all as equal to one another and have them be in\nwhatever order they happen to fall in.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Thu, 16 Aug 2012 09:04:12 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cluster on conditional index?"
},
{
"msg_contents": "On Wed, Aug 15, 2012 at 2:19 PM, Bosco Rama <[email protected]> wrote:\n> On 08/15/12 14:05, Josh Berkus wrote:\n>>\n>>> That actually makes sense to me. Cluster the rows covered by that\n>>> index, let the rest fall where they may. I'm typically only accessing\n>>> the rows covered by that index, so I'd get the benefit of the cluster\n>>> command but wouldn't have to spend cycles doing the cluster for rows I\n>>> don't care about.\n>>\n>> Sure, that's a feature request though. And thinking about it, I'm\n>> willing to bet that it's far harder to implement than it sounds.\n>>\n>> In the meantime, you could ad-hoc this by splitting the table into two\n>> partitions and clustering one of the two partitions.\n>\n> Wouldn't creating a second index on the boolean itself and then clustering\n> on that be much easier?\n\nI would take an existing useful index, and build a new one on the same\ncolumns but with is_deleted prepended.\n\nThat way, since you are going through the effort to rewrite the whole\ntable anyway, the ties are broken in a way that might be of further\nuse.\n\nOnce the CLUSTER is done, the index might even be useful enough to\nkeep around for use with queries, or even replace the original index\naltogether.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Thu, 16 Aug 2012 09:16:45 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cluster on conditional index?"
}
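
A sketch of that suggestion with hypothetical names: take an existing useful index, rebuild it with is_deleted as the leading column (and without the WHERE clause, so it is no longer partial), then cluster on it:

    CREATE INDEX bar_isdel_customer_idx ON bar (is_deleted, customer_id);
    CLUSTER bar USING bar_isdel_customer_idx;
    ANALYZE bar;
    -- If the new index covers the same queries, the original index on
    -- customer_id (or the partial one) may now be redundant and droppable.
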
] |
[
{
"msg_contents": "Hi,\n\nMy application has high data intensive operations (high number of inserts\n1500 per sec.). I switched my application from MySQL to PostgreSQL. When I\ntake performance comparison report between mysql and pgsql, I found that,\nthere are huge difference in disk writes and disk space taken. Below stats\nshows the difference between MySQL and PostgreSQL.\n\n\n*MySQL**PostgreSQL*Inserts Per Second*15001500Updates Per Second*6.56.5Disk\nWrite Per Second*0.9 MB6.2 MBDatabase Size Increased Per day*13 GB36 GB\n* approx values\n\nWhy this huge difference in disk writes and disk space utilization? How can\nI reduce the disk write and space ? Kindly help me. Please let me know, if\nyou require any other information(such as postgres.conf).\n\nThanks,\nRamesh\n\nHi,\nMy application has high data intensive operations (high number of inserts 1500 per sec.). I switched my application from MySQL to PostgreSQL. When I take performance comparison report between mysql and pgsql, I found that, there are huge difference in disk writes and disk space taken. Below stats shows the difference between MySQL and PostgreSQL.\n\nMySQLPostgreSQLInserts Per Second*15001500\nUpdates Per Second*6.56.5Disk Write Per Second*\n0.9 MB6.2 MBDatabase Size Increased Per day*\n13 GB36 GB* approx values\nWhy this huge difference in disk writes and disk space utilization? How can I reduce the disk write and space ? Kindly help me. Please let me know, if you require any other information(such as postgres.conf).\n\nThanks,Ramesh",
"msg_date": "Thu, 16 Aug 2012 08:53:06 +0530",
"msg_from": "J Ramesh Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "High Disk write and space taken by PostgreSQL"
},
{
"msg_contents": "Hi Ramesh,\n\nAre you able to provide a table schema? Were you using MyISAM or InnoDB \non MySQL?\n\nIf you back up the database & restore clean, what is the size comparison \nof the database filed on the restored copy to the existing one? It may \nbe full of empty tuples. Is there any period where you could try a full \nvacuum?\n\nWhat are your indexes? Is the size in the indexes or the database tables?\n\nAt the current rate of insertion, that table is going to get very large \nvery quickly. Do you have anything deleting the rows afterwards? I \nhave no experience with databases past 50M rows, so my questions are \njust so you can line up the right info for when the real experts get \nonline :-)\n\nRegards, David\n\nOn 16/08/12 11:23, J Ramesh Kumar wrote:\n>\n> Hi,\n>\n> My application has high data intensive operations (high number of \n> inserts 1500 per sec.). I switched my application from MySQL to \n> PostgreSQL. When I take performance comparison report between mysql \n> and pgsql, I found that, there are huge difference in disk writes and \n> disk space taken. Below stats shows the difference between MySQL and \n> PostgreSQL.\n>\n>\n> \t*MySQL* \t*PostgreSQL*\n> Inserts Per Second* \t1500 \t1500\n> Updates Per Second* \t6.5 \t6.5\n> Disk Write Per Second* \t0.9 MB \t6.2 MB\n> Database Size Increased Per day* \t13 GB \t36 GB\n>\n>\n> * approx values\n>\n> Why this huge difference in disk writes and disk space utilization? \n> How can I reduce the disk write and space ? Kindly help me. Please let \n> me know, if you require any other information(such as postgres.conf).\n>\n> Thanks,\n> Ramesh\n\n\n\n\n\n\n\n Hi Ramesh,\n\n Are you able to provide a table schema? Were you using MyISAM or\n InnoDB on MySQL?\n\n If you back up the database & restore clean, what is the size\n comparison of the database filed on the restored copy to the\n existing one? It may be full of empty tuples. Is there any period\n where you could try a full vacuum?\n\n What are your indexes? Is the size in the indexes or the database\n tables?\n\n At the current rate of insertion, that table is going to get very\n large very quickly. Do you have anything deleting the rows\n afterwards? I have no experience with databases past 50M rows, so\n my questions are just so you can line up the right info for when the\n real experts get online :-)\n\n Regards, David\n\nOn 16/08/12 11:23, J Ramesh Kumar\n wrote:\n\n\nHi,\n\n\nMy\n application has high data intensive operations (high number of\n inserts 1500 per sec.). I switched my application from MySQL to\n PostgreSQL. When I take performance comparison report between\n mysql and pgsql, I found that, there are huge difference in disk\n writes and disk space taken. Below stats shows the difference\n between MySQL and PostgreSQL.\n\n\n\n\n\n\nMySQL\nPostgreSQL\n\n\nInserts Per Second*\n1500\n1500\n\n\nUpdates\n Per Second*\n6.5\n6.5\n\n\nDisk\n Write Per Second*\n0.9 MB\n6.2 MB\n\n\nDatabase\n Size Increased Per day*\n\n 13 GB\n36 GB\n\n\n\n\n\n*\n approx values\n\n\n\nWhy\n this huge difference in disk writes and disk space utilization?\n How can I reduce the disk write and space ? Kindly help me.\n Please let me know, if you require any other information(such as\n postgres.conf).\n\n\nThanks,\nRamesh",
"msg_date": "Thu, 16 Aug 2012 11:36:53 +0800",
"msg_from": "David Barton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Disk write and space taken by PostgreSQL"
},
{
"msg_contents": "Hi David Barton,\n\nPlease find the information below.\n\nAre you able to provide a table schema?\n\n\nThere are 109 different types of table. I am maintaining some tables are\ndaily tables and some tables are ID based. So totally we have created\naround 350 tables and dropped around 350 tables. I will drop the old table\nand I don't delete any records. I am maintaing only last 30 days tables. I\ndropped tables which are older than 30 days. All the tables are only have\nbasic data types like int, smallint, bigint, varchar.\n\n\n\n> Were you using MyISAM or InnoDB on MySQL?\n\n\nI am using MyISAM tables in MySQL.\n\n\nWhat are your indexes? Is the size in the indexes or the database tables?\n\n\nThe size I mentioned is the total folder size of the data directory. There\nis no difference in the database schema / index between MySQL and\nPostgreSQL.\n\nIf you back up the database & restore clean, what is the size comparison of\n> the database filed on the restored copy to the existing one?\n\n\nI don't take backup and restore.\n\n Is there any period where you could try a full vacuum?\n\n\nSince my app only doing inserts and drops(no delete), I believe the vacuum\nwill not give any advantage. So I have the below configuration in my\ndatabase. Event the updates only performed in a very small table which has\n5 int + 1 small int + 1 real fields.\n\n# To avoid freqent autovacuum\nautovacuum_freeze_max_age = 2000000000\nvacuum_freeze_min_age = 10000000\nvacuum_freeze_table_age = 150000000\n\nThanks,\nRamesh\n\nOn Thu, Aug 16, 2012 at 9:06 AM, David Barton <[email protected]> wrote:\n\n> Hi Ramesh,\n>\n> Are you able to provide a table schema? Were you using MyISAM or InnoDB\n> on MySQL?\n>\n> If you back up the database & restore clean, what is the size comparison\n> of the database filed on the restored copy to the existing one? It may be\n> full of empty tuples. Is there any period where you could try a full\n> vacuum?\n>\n> What are your indexes? Is the size in the indexes or the database tables?\n>\n> At the current rate of insertion, that table is going to get very large\n> very quickly. Do you have anything deleting the rows afterwards? I have\n> no experience with databases past 50M rows, so my questions are just so you\n> can line up the right info for when the real experts get online :-)\n>\n> Regards, David\n>\n>\n> On 16/08/12 11:23, J Ramesh Kumar wrote:\n>\n>\n> Hi,\n>\n> My application has high data intensive operations (high number of\n> inserts 1500 per sec.). I switched my application from MySQL to PostgreSQL.\n> When I take performance comparison report between mysql and pgsql, I found\n> that, there are huge difference in disk writes and disk space taken. Below\n> stats shows the difference between MySQL and PostgreSQL.\n>\n>\n> *MySQL* *PostgreSQL* Inserts Per Second* 1500 1500 Updates Per Second*\n> 6.5 6.5 Disk Write Per Second* 0.9 MB 6.2 MB Database Size Increased\n> Per day* 13 GB 36 GB\n> * approx values\n>\n> Why this huge difference in disk writes and disk space utilization? How\n> can I reduce the disk write and space ? Kindly help me. Please let me know,\n> if you require any other information(such as postgres.conf).\n>\n> Thanks,\n> Ramesh\n>\n>\n>\n\nHi David Barton,\nPlease find the information below. \nAre you able to provide a table schema? \nThere are 109 different types of table. I am maintaining some tables are daily tables and some tables are ID based. So totally we have created around 350 tables and dropped around 350 tables. 
I will drop the old table and I don't delete any records. I am maintaing only last 30 days tables. I dropped tables which are older than 30 days. All the tables are only have basic data types like int, smallint, bigint, varchar.\n \nWere you using MyISAM or InnoDB on MySQL?I am using MyISAM tables in MySQL. \nWhat are your indexes? Is the size in the indexes or the database tables?\nThe size I mentioned is the total folder size of the data directory. There is no difference in the database schema / index between MySQL and PostgreSQL.\nIf you back up the database & restore clean, what is the size comparison of the database filed on the restored copy to the existing one?\nI don't take backup and restore.\n Is there any period where you could try a full vacuum?Since my app only doing inserts and drops(no delete), I believe the vacuum will not give any advantage. So I have the below configuration in my database. Event the updates only performed in a very small table which has 5 int + 1 small int + 1 real fields. \n# To avoid freqent autovacuumautovacuum_freeze_max_age = 2000000000vacuum_freeze_min_age = 10000000vacuum_freeze_table_age = 150000000 Thanks,\nRameshOn Thu, Aug 16, 2012 at 9:06 AM, David Barton <[email protected]> wrote:\n\n Hi Ramesh,\n\n Are you able to provide a table schema? Were you using MyISAM or\n InnoDB on MySQL?\n\n If you back up the database & restore clean, what is the size\n comparison of the database filed on the restored copy to the\n existing one? It may be full of empty tuples. Is there any period\n where you could try a full vacuum?\n\n What are your indexes? Is the size in the indexes or the database\n tables?\n\n At the current rate of insertion, that table is going to get very\n large very quickly. Do you have anything deleting the rows\n afterwards? I have no experience with databases past 50M rows, so\n my questions are just so you can line up the right info for when the\n real experts get online :-)\n\n Regards, David\n\nOn 16/08/12 11:23, J Ramesh Kumar\n wrote:\n\n\nHi,\n\n\nMy\n application has high data intensive operations (high number of\n inserts 1500 per sec.). I switched my application from MySQL to\n PostgreSQL. When I take performance comparison report between\n mysql and pgsql, I found that, there are huge difference in disk\n writes and disk space taken. Below stats shows the difference\n between MySQL and PostgreSQL.\n\n\n\n\n\n\nMySQL\nPostgreSQL\n\n\nInserts Per Second*\n1500\n1500\n\n\nUpdates\n Per Second*\n6.5\n6.5\n\n\nDisk\n Write Per Second*\n0.9 MB\n6.2 MB\n\n\nDatabase\n Size Increased Per day*\n\n 13 GB\n36 GB\n\n\n\n\n\n*\n approx values\n\n\n\nWhy\n this huge difference in disk writes and disk space utilization?\n How can I reduce the disk write and space ? Kindly help me.\n Please let me know, if you require any other information(such as\n postgres.conf).\n\n\nThanks,\nRamesh",
"msg_date": "Thu, 16 Aug 2012 10:00:47 +0530",
"msg_from": "J Ramesh Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High Disk write and space taken by PostgreSQL"
},
{
"msg_contents": "Please use plain text on the list, some folks don't have mail readers\nthat can handle html easily.\n\nOn Wed, Aug 15, 2012 at 10:30 PM, J Ramesh Kumar <[email protected]> wrote:\n>\n> Hi David Barton,\n>\n> Please find the information below.\n>\n>> Are you able to provide a table schema?\n>\n>\n> There are 109 different types of table. I am maintaining some tables are\n> daily tables and some tables are ID based. So totally we have created around\n> 350 tables and dropped around 350 tables. I will drop the old table and I\n> don't delete any records. I am maintaing only last 30 days tables. I dropped\n> tables which are older than 30 days. All the tables are only have basic data\n> types like int, smallint, bigint, varchar.\n>\n>\n>>\n>> Were you using MyISAM or InnoDB on MySQL?\n>\n>\n> I am using MyISAM tables in MySQL.\n\nWell that explains a lot. MyISAM is not transaction or crash safe.\nOn a machine with decent hardware (i.e. it doesn't lie about fsync)\nyou can pull the plugs out the back of your postgresql server and any\ncommitted transactions will still be there. Your myisam tables in\nmysql will be corrupted and data may or may not be there that you\ninserted.\n\nMyISAM is great if your data is easily reproduceable or not that\nimportant. If it's important etc then it's not such a great choice.\n\nBecause of the overhead of being transactionally safe, postgresql\nactually writes everything twice, once to a write ahead log, and then\nflushed out to the actual tables. It is quite likely that at your\nvery high write rate you have a LOT of transactional logs.\n\n>> If you back up the database & restore clean, what is the size comparison\n>> of the database filed on the restored copy to the existing one?\n>\n>\n> I don't take backup and restore.\n\nThat's not the question. What David is wondering is if you have a lot\nof table bloat, for instance from a lot of updates or deletes.\nPostgreSQL uses an in-store MVCC system that can bloat your tables\nwith a lot of deletes / updates happening at once or really fast. So\nit's more of a troubleshooting suggestion. I'm guessing that since\nyou don't backup your data it's not that important, so mysql with\nmyisam may be a better choice in some ways.\n\nOTOH if you need to run complex reporting queries, MySQL's query\nplanner is dumb as a stump and will likely run very poorly or be\nmissing features postgresql has like CTEs and what not. Trade off,\nneither db is perfect for everything, but know that complex queries in\nmysql can often take many orders of magnitude longer than in pgsql.\n\n>> Is there any period where you could try a full vacuum?\n>\n>\n> Since my app only doing inserts and drops(no delete), I believe the vacuum\n> will not give any advantage. So I have the below configuration in my\n> database. Event the updates only performed in a very small table which has 5\n> int + 1 small int + 1 real fields.\n\nAhhh but updates are the basically delete / inserts in disguise, so if\nthere's enough, then yes, vacuum full would make a difference.\n\nBasically the difference you are seeing is the difference between a\ndatabase (postgresql) and a data store (mysql + myisam). I wonder\nwhat you'd see if you tried mysql with innodb tables, which are\ntransaction and crash safe like postgresql. I'm guessing there would\nbe something a bit closer to parity there.\n\n",
"msg_date": "Wed, 15 Aug 2012 22:39:18 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Disk write and space taken by PostgreSQL"
},
{
"msg_contents": "Dear Scott Marlowe,\n\nThanks for the details.\n\nAs you said, MySQL with MyISAM is better choice for my app. Because I don't\nneed transaction/backup. May be I'll try with InnoDB and find the disk\nwrite/space difference. Is there any similar methods available in\npostgresql like MyISAM engine ?\n\n>>> Ahhh but updates are the basically delete / inserts in disguise, so\nif there's enough, then yes, vacuum full would make a difference.\n\nThe table which get update has very less data ie, only has 900 rows. Out of\n10500 tables, only one table is getting update frequently. Is there any way\nto vacuum a specific table instead of whole database ?\n\nThanks,\nRamesh\n\nOn Thu, Aug 16, 2012 at 10:09 AM, Scott Marlowe <[email protected]>wrote:\n\n> Please use plain text on the list, some folks don't have mail readers\n> that can handle html easily.\n>\n> On Wed, Aug 15, 2012 at 10:30 PM, J Ramesh Kumar <[email protected]>\n> wrote:\n> >\n> > Hi David Barton,\n> >\n> > Please find the information below.\n> >\n> >> Are you able to provide a table schema?\n> >\n> >\n> > There are 109 different types of table. I am maintaining some tables are\n> > daily tables and some tables are ID based. So totally we have created\n> around\n> > 350 tables and dropped around 350 tables. I will drop the old table and I\n> > don't delete any records. I am maintaing only last 30 days tables. I\n> dropped\n> > tables which are older than 30 days. All the tables are only have basic\n> data\n> > types like int, smallint, bigint, varchar.\n> >\n> >\n> >>\n> >> Were you using MyISAM or InnoDB on MySQL?\n> >\n> >\n> > I am using MyISAM tables in MySQL.\n>\n> Well that explains a lot. MyISAM is not transaction or crash safe.\n> On a machine with decent hardware (i.e. it doesn't lie about fsync)\n> you can pull the plugs out the back of your postgresql server and any\n> committed transactions will still be there. Your myisam tables in\n> mysql will be corrupted and data may or may not be there that you\n> inserted.\n>\n> MyISAM is great if your data is easily reproduceable or not that\n> important. If it's important etc then it's not such a great choice.\n>\n> Because of the overhead of being transactionally safe, postgresql\n> actually writes everything twice, once to a write ahead log, and then\n> flushed out to the actual tables. It is quite likely that at your\n> very high write rate you have a LOT of transactional logs.\n>\n> >> If you back up the database & restore clean, what is the size comparison\n> >> of the database filed on the restored copy to the existing one?\n> >\n> >\n> > I don't take backup and restore.\n>\n> That's not the question. What David is wondering is if you have a lot\n> of table bloat, for instance from a lot of updates or deletes.\n> PostgreSQL uses an in-store MVCC system that can bloat your tables\n> with a lot of deletes / updates happening at once or really fast. So\n> it's more of a troubleshooting suggestion. I'm guessing that since\n> you don't backup your data it's not that important, so mysql with\n> myisam may be a better choice in some ways.\n>\n> OTOH if you need to run complex reporting queries, MySQL's query\n> planner is dumb as a stump and will likely run very poorly or be\n> missing features postgresql has like CTEs and what not. 
Trade off,\n> neither db is perfect for everything, but know that complex queries in\n> mysql can often take many orders of magnitude longer than in pgsql.\n>\n> >> Is there any period where you could try a full vacuum?\n> >\n> >\n> > Since my app only doing inserts and drops(no delete), I believe the\n> vacuum\n> > will not give any advantage. So I have the below configuration in my\n> > database. Event the updates only performed in a very small table which\n> has 5\n> > int + 1 small int + 1 real fields.\n>\n> Ahhh but updates are the basically delete / inserts in disguise, so if\n> there's enough, then yes, vacuum full would make a difference.\n>\n> Basically the difference you are seeing is the difference between a\n> database (postgresql) and a data store (mysql + myisam). I wonder\n> what you'd see if you tried mysql with innodb tables, which are\n> transaction and crash safe like postgresql. I'm guessing there would\n> be something a bit closer to parity there.\n>\n\nDear Scott Marlowe,Thanks for the details. As you said, MySQL with MyISAM is better choice for my app. Because I don't need transaction/backup. May be I'll try with InnoDB and find the disk write/space difference. Is there any similar methods available in postgresql like MyISAM engine ?\n>>> Ahhh but updates are the basically delete / inserts in disguise, so if there's enough, then yes, vacuum full would make a difference.The table which get update has very less data ie, only has 900 rows. Out of 10500 tables, only one table is getting update frequently. Is there any way to vacuum a specific table instead of whole database ?\nThanks,RameshOn Thu, Aug 16, 2012 at 10:09 AM, Scott Marlowe <[email protected]> wrote:\nPlease use plain text on the list, some folks don't have mail readers\nthat can handle html easily.\n\nOn Wed, Aug 15, 2012 at 10:30 PM, J Ramesh Kumar <[email protected]> wrote:\n>\n> Hi David Barton,\n>\n> Please find the information below.\n>\n>> Are you able to provide a table schema?\n>\n>\n> There are 109 different types of table. I am maintaining some tables are\n> daily tables and some tables are ID based. So totally we have created around\n> 350 tables and dropped around 350 tables. I will drop the old table and I\n> don't delete any records. I am maintaing only last 30 days tables. I dropped\n> tables which are older than 30 days. All the tables are only have basic data\n> types like int, smallint, bigint, varchar.\n>\n>\n>>\n>> Were you using MyISAM or InnoDB on MySQL?\n>\n>\n> I am using MyISAM tables in MySQL.\n\nWell that explains a lot. MyISAM is not transaction or crash safe.\nOn a machine with decent hardware (i.e. it doesn't lie about fsync)\nyou can pull the plugs out the back of your postgresql server and any\ncommitted transactions will still be there. Your myisam tables in\nmysql will be corrupted and data may or may not be there that you\ninserted.\n\nMyISAM is great if your data is easily reproduceable or not that\nimportant. If it's important etc then it's not such a great choice.\n\nBecause of the overhead of being transactionally safe, postgresql\nactually writes everything twice, once to a write ahead log, and then\nflushed out to the actual tables. It is quite likely that at your\nvery high write rate you have a LOT of transactional logs.\n\n>> If you back up the database & restore clean, what is the size comparison\n>> of the database filed on the restored copy to the existing one?\n>\n>\n> I don't take backup and restore.\n\nThat's not the question. 
What David is wondering is if you have a lot\nof table bloat, for instance from a lot of updates or deletes.\nPostgreSQL uses an in-store MVCC system that can bloat your tables\nwith a lot of deletes / updates happening at once or really fast. So\nit's more of a troubleshooting suggestion. I'm guessing that since\nyou don't backup your data it's not that important, so mysql with\nmyisam may be a better choice in some ways.\n\nOTOH if you need to run complex reporting queries, MySQL's query\nplanner is dumb as a stump and will likely run very poorly or be\nmissing features postgresql has like CTEs and what not. Trade off,\nneither db is perfect for everything, but know that complex queries in\nmysql can often take many orders of magnitude longer than in pgsql.\n\n>> Is there any period where you could try a full vacuum?\n>\n>\n> Since my app only doing inserts and drops(no delete), I believe the vacuum\n> will not give any advantage. So I have the below configuration in my\n> database. Event the updates only performed in a very small table which has 5\n> int + 1 small int + 1 real fields.\n\nAhhh but updates are the basically delete / inserts in disguise, so if\nthere's enough, then yes, vacuum full would make a difference.\n\nBasically the difference you are seeing is the difference between a\ndatabase (postgresql) and a data store (mysql + myisam). I wonder\nwhat you'd see if you tried mysql with innodb tables, which are\ntransaction and crash safe like postgresql. I'm guessing there would\nbe something a bit closer to parity there.",
"msg_date": "Thu, 16 Aug 2012 11:10:19 +0530",
"msg_from": "J Ramesh Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High Disk write and space taken by PostgreSQL"
},
{
"msg_contents": "On Thu, Aug 16, 2012 at 1:30 AM, J Ramesh Kumar <[email protected]>wrote:\n\n> # To avoid freqent autovacuum\n> autovacuum_freeze_max_age = 2000000000\n> vacuum_freeze_min_age = 10000000\n> vacuum_freeze_table_age = 150000000\n>\n\nIn general, I'm no expert, but I've heard, increasing freeze_max_age isn't\nwise. It's there to be decreased, and the risk is data corruption.\n\nYou should check PG's docs to be sure, but I think the default is usually\nsafe and fast enough.\n\nAnd, if you have updates (anywhere), avoiding autovacuum may not be a good\nidea either. Autovacuum won't bother you on tables you don't update, so I\nthink you're optimizing prematurely here. If you're worrying about it, just\nincrease its naptime.\n\nYou'll most definitely need to vacuum pg's catalog with that many (and\nregular) schema changes, and autovacuum also takes care of that.\n\nYou may also want to set asynchronous_commits, to better match MyISAM's\ncharacteristics. Or even, just for benchmarking, fsync=off (I wouldn't do\nit in production though).\n\nAnyway, seeing the schema of at least one of the biggest growing tables\nwould probably help figuring out why the disk usage growth. Index bloat\ncomes to mind.\n\n\nOn Thu, Aug 16, 2012 at 1:30 AM, J Ramesh Kumar <[email protected]>wrote:\n\n> What are your indexes? Is the size in the indexes or the database tables?\n>\n>\n> The size I mentioned is the total folder size of the data directory. There\n> is no difference in the database schema / index between MySQL and\n> PostgreSQL.\n\n\nYou have a problem right there. Postgres and Mysql are completely different\nbeasts, you *will* need to tailor indices specifically for each of them.\nYou'll find, probably, many indices you needed in MySQL are no longer\nneeded with postgres (because it has a much more sophisticated planner).\n\nOn Thu, Aug 16, 2012 at 1:30 AM, J Ramesh Kumar <[email protected]> wrote:\n# To avoid freqent autovacuumautovacuum_freeze_max_age = 2000000000vacuum_freeze_min_age = 10000000vacuum_freeze_table_age = 150000000\nIn general, I'm no expert, but I've heard, increasing freeze_max_age \nisn't wise. It's there to be decreased, and the risk is data corruption.\n\nYou should check PG's docs to be sure, but I think the default is usually safe and fast enough.\n\nAnd, if you have updates (anywhere), avoiding autovacuum may not be a \ngood idea either. Autovacuum won't bother you on tables you don't \nupdate, so I think you're optimizing prematurely here. If you're \nworrying about it, just increase its naptime.\n\nYou'll most definitely need to vacuum pg's catalog with that many (and \nregular) schema changes, and autovacuum also takes care of that.\n\nYou may also want to set asynchronous_commits, to better match MyISAM's \ncharacteristics. Or even, just for benchmarking, fsync=off (I wouldn't \ndo it in production though).\n\nAnyway, seeing the schema of at least one of the biggest growing tables \nwould probably help figuring out why the disk usage growth. Index bloat \ncomes to mind.On Thu, Aug 16, 2012 at 1:30 AM, J Ramesh Kumar <[email protected]> wrote:\n\nWhat are your indexes? Is the size in the indexes or the database tables?\nThe size I mentioned is the total folder size of the data directory. There is no difference in the database schema / index between MySQL and PostgreSQL.\nYou have a problem right there. Postgres and Mysql are completely different beasts, you *will* need to tailor indices specifically for each of them. 
You'll find, probably, many indices you needed in MySQL are no longer needed with postgres (because it has a much more sophisticated planner).",
"msg_date": "Thu, 16 Aug 2012 02:41:38 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Disk write and space taken by PostgreSQL"
},
{
"msg_contents": "Hi,\n\nOn 16 August 2012 15:40, J Ramesh Kumar <[email protected]> wrote:\n> As you said, MySQL with MyISAM is better choice for my app. Because I don't\n> need transaction/backup. May be I'll try with InnoDB and find the disk\n> write/space difference. Is there any similar methods available in postgresql\n> like MyISAM engine ?\n\nYou can try unlogged tables:\nhttp://www.postgresql.org/docs/9.1/static/sql-createtable.html\n\nIf specified, the table is created as an unlogged table. Data written\nto unlogged tables is not written to the write-ahead log (see Chapter\n29), which makes them considerably faster than ordinary tables.\nHowever, they are not crash-safe: an unlogged table is automatically\ntruncated after a crash or unclean shutdown. The contents of an\nunlogged table are also not replicated to standby servers. Any indexes\ncreated on an unlogged table are automatically unlogged as well;\nhowever, unlogged GiST indexes are currently not supported and cannot\nbe created on an unlogged table.\n\n>\n>>>> Ahhh but updates are the basically delete / inserts in disguise, so if\n>>>> there's enough, then yes, vacuum full would make a difference.\n>\n> The table which get update has very less data ie, only has 900 rows. Out of\n> 10500 tables, only one table is getting update frequently. Is there any way\n> to vacuum a specific table instead of whole database ?\n\nYou can run \"vacuum <table name>\" but I doubt if that makes sense to\nrun it manually when you have 1500 tx / sec. Postgres has HOT updates\nwhich have high change to reuse existing space:\n\n From 8.3 release notes:\nHeap-Only Tuples (HOT) accelerate space reuse for most UPDATEs and\nDELETEs (Pavan Deolasee, with ideas from many others)\nUPDATEs and DELETEs leave dead tuples behind, as do failed INSERTs.\nPreviously only VACUUM could reclaim space taken by dead tuples. With\nHOT dead tuple space can be automatically reclaimed at the time of\nINSERT or UPDATE if no changes are made to indexed columns. This\nallows for more consistent performance. Also, HOT avoids adding\nduplicate index entries.\n\n-- \nOndrej Ivanic\n([email protected])\n\n",
"msg_date": "Thu, 16 Aug 2012 15:48:57 +1000",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Disk write and space taken by PostgreSQL"
},
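A minimal sketch of the unlogged-table route described above; the table name and columns are invented for illustration, and the SET only delays WAL flushing (a crash loses the last few commits) rather than risking corruption the way fsync=off can:

-- Writes to this table bypass WAL entirely; contents are truncated after a crash.
CREATE UNLOGGED TABLE sensor_log (
    id         bigserial PRIMARY KEY,
    device_id  integer NOT NULL,
    reading    double precision,
    logged_at  timestamptz NOT NULL DEFAULT now()
);

-- Asynchronous commit can be enabled per session; it defers the WAL flush
-- instead of skipping it, so durability of only the newest transactions is traded away.
SET synchronous_commit TO off;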
{
"msg_contents": "On Thu, Aug 16, 2012 at 2:40 AM, J Ramesh Kumar <[email protected]> wrote:\n>>>> Ahhh but updates are the basically delete / inserts in disguise, so if\n>>>> there's enough, then yes, vacuum full would make a difference.\n>\n> The table which get update has very less data ie, only has 900 rows. Out of\n> 10500 tables, only one table is getting update frequently. Is there any way\n> to vacuum a specific table instead of whole database ?\n\nJust let autovacuum figure it out. It's smart enough not to touch\ninsert-only tables last I checked, and you can set I/O limits to make\nsure it doesn't interfere.\n\nIf you don't care about possible data corruption if the system\ncrashes, you can set fsync=off and get many of the performance\nbenefits. But you don't have ways to reduce disk usage other than\ndropping indices (and you may have unused indices, do check their\nstatistics), and making sure autovacuum is running where it's needed.\n\nA backup/restore or a vacuum full + reindex will get rid of all bloat.\nIf your DB size goes down considerably after that, you have bloat. If\nnot, you don't. You can even do that with a single (old) table to\ncheck it out.\n\n",
"msg_date": "Thu, 16 Aug 2012 02:50:46 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Disk write and space taken by PostgreSQL"
},
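One way to act on the "check their statistics" advice above is to look for indexes that are never scanned; a sketch, with one reasonable choice of columns:

-- Indexes with zero scans since the statistics were last reset are candidates
-- for dropping; their size is pure write amplification on every insert.
SELECT schemaname,
       relname,
       indexrelname,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC, pg_relation_size(indexrelid) DESC
LIMIT 20;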
{
"msg_contents": "On Thu, Aug 16, 2012 at 03:48:57PM +1000, Ondrej Ivanič wrote:\n> Hi,\n> \n> On 16 August 2012 15:40, J Ramesh Kumar <[email protected]> wrote:\n> > As you said, MySQL with MyISAM is better choice for my app. Because I don't\n> > need transaction/backup. May be I'll try with InnoDB and find the disk\n> > write/space difference. Is there any similar methods available in postgresql\n> > like MyISAM engine ?\n> \n> You can try unlogged tables:\n> http://www.postgresql.org/docs/9.1/static/sql-createtable.html\n> \n> If specified, the table is created as an unlogged table. Data written\n> to unlogged tables is not written to the write-ahead log (see Chapter\n> 29), which makes them considerably faster than ordinary tables.\n> However, they are not crash-safe: an unlogged table is automatically\n> truncated after a crash or unclean shutdown. The contents of an\n> unlogged table are also not replicated to standby servers. Any indexes\n> created on an unlogged table are automatically unlogged as well;\n> however, unlogged GiST indexes are currently not supported and cannot\n> be created on an unlogged table.\n\nI would set full_page_writes = off too.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n",
"msg_date": "Thu, 16 Aug 2012 10:53:21 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Disk write and space taken by PostgreSQL"
},
{
"msg_contents": "On Thu, Aug 16, 2012 at 10:53:21AM -0400, Bruce Momjian wrote:\n> On Thu, Aug 16, 2012 at 03:48:57PM +1000, Ondrej Ivanič wrote:\n> > Hi,\n> > \n> > On 16 August 2012 15:40, J Ramesh Kumar <[email protected]> wrote:\n> > > As you said, MySQL with MyISAM is better choice for my app. Because I don't\n> > > need transaction/backup. May be I'll try with InnoDB and find the disk\n> > > write/space difference. Is there any similar methods available in postgresql\n> > > like MyISAM engine ?\n> > \n> > You can try unlogged tables:\n> > http://www.postgresql.org/docs/9.1/static/sql-createtable.html\n> > \n> > If specified, the table is created as an unlogged table. Data written\n> > to unlogged tables is not written to the write-ahead log (see Chapter\n> > 29), which makes them considerably faster than ordinary tables.\n> > However, they are not crash-safe: an unlogged table is automatically\n> > truncated after a crash or unclean shutdown. The contents of an\n> > unlogged table are also not replicated to standby servers. Any indexes\n> > created on an unlogged table are automatically unlogged as well;\n> > however, unlogged GiST indexes are currently not supported and cannot\n> > be created on an unlogged table.\n> \n> I would set full_page_writes = off too.\n\nBetter yet, read our documentation about non-durable settting:\n\n\thttp://www.postgresql.org/docs/9.1/static/non-durability.html\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n",
"msg_date": "Thu, 16 Aug 2012 10:56:21 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Disk write and space taken by PostgreSQL"
},
{
"msg_contents": "On Wed, Aug 15, 2012 at 11:40 PM, J Ramesh Kumar <[email protected]> wrote:\n> Dear Scott Marlowe,\n>\n> Thanks for the details.\n>\n> As you said, MySQL with MyISAM is better choice for my app. Because I don't\n> need transaction/backup.\n\nThat's not exactly what I said. Remember that if you need to run\ncomplex queries postgresql is still likely the better candidate.\n\n> May be I'll try with InnoDB and find the disk\n> write/space difference. Is there any similar methods available in postgresql\n> like MyISAM engine ?\n\nUnlogged tables as mentioned by others.\n\n",
"msg_date": "Thu, 16 Aug 2012 09:49:53 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Disk write and space taken by PostgreSQL"
},
{
"msg_contents": "\n\nBruce Momjian <[email protected]> schrieb:\n\n>On Thu, Aug 16, 2012 at 03:48:57PM +1000, Ondrej Ivanič wrote:\n>> Hi,\n>> \n>> On 16 August 2012 15:40, J Ramesh Kumar <[email protected]>\n>wrote:\n>> > As you said, MySQL with MyISAM is better choice for my app. Because\n>I don't\n>> > need transaction/backup. May be I'll try with InnoDB and find the\n>disk\n>> > write/space difference. Is there any similar methods available in\n>postgresql\n>> > like MyISAM engine ?\n>> \n>> You can try unlogged tables:\n>> http://www.postgresql.org/docs/9.1/static/sql-createtable.html\n>> \n>> If specified, the table is created as an unlogged table. Data written\n>> to unlogged tables is not written to the write-ahead log (see Chapter\n>> 29), which makes them considerably faster than ordinary tables.\n>> However, they are not crash-safe: an unlogged table is automatically\n>> truncated after a crash or unclean shutdown. The contents of an\n>> unlogged table are also not replicated to standby servers. Any\n>indexes\n>> created on an unlogged table are automatically unlogged as well;\n>> however, unlogged GiST indexes are currently not supported and cannot\n>> be created on an unlogged table.\n>\n>I would set full_page_writes = off too.\nWhy? There shouldn't be any such writes on unlogged tables.\n\nAndres\n\nPlease excuse the brevity and formatting - I am writing this on my mobile phone.\n\n",
"msg_date": "Thu, 16 Aug 2012 18:07:26 +0200",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Disk write and space taken by PostgreSQL"
},
{
"msg_contents": "On Thu, Aug 16, 2012 at 06:07:26PM +0200, [email protected] wrote:\n> \n> \n> Bruce Momjian <[email protected]> schrieb:\n> \n> >On Thu, Aug 16, 2012 at 03:48:57PM +1000, Ondrej Ivanič wrote:\n> >> Hi,\n> >> \n> >> On 16 August 2012 15:40, J Ramesh Kumar <[email protected]>\n> >wrote:\n> >> > As you said, MySQL with MyISAM is better choice for my app. Because\n> >I don't\n> >> > need transaction/backup. May be I'll try with InnoDB and find the\n> >disk\n> >> > write/space difference. Is there any similar methods available in\n> >postgresql\n> >> > like MyISAM engine ?\n> >> \n> >> You can try unlogged tables:\n> >> http://www.postgresql.org/docs/9.1/static/sql-createtable.html\n> >> \n> >> If specified, the table is created as an unlogged table. Data written\n> >> to unlogged tables is not written to the write-ahead log (see Chapter\n> >> 29), which makes them considerably faster than ordinary tables.\n> >> However, they are not crash-safe: an unlogged table is automatically\n> >> truncated after a crash or unclean shutdown. The contents of an\n> >> unlogged table are also not replicated to standby servers. Any\n> >indexes\n> >> created on an unlogged table are automatically unlogged as well;\n> >> however, unlogged GiST indexes are currently not supported and cannot\n> >> be created on an unlogged table.\n> >\n> >I would set full_page_writes = off too.\n> Why? There shouldn't be any such writes on unlogged tables.\n\nTrue. I was thinking more of the logged tables, and the system tables.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n",
"msg_date": "Thu, 16 Aug 2012 12:23:03 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Disk write and space taken by PostgreSQL"
},
{
"msg_contents": "On Wed, Aug 15, 2012 at 11:30 PM, J Ramesh Kumar <[email protected]> wrote:\n>\n> Hi David Barton,\n>\n> Please find the information below.\n>\n>> Are you able to provide a table schema?\n>\n>\n> There are 109 different types of table. I am maintaining some tables are\n> daily tables and some tables are ID based. So totally we have created around\n> 350 tables and dropped around 350 tables. I will drop the old table and I\n> don't delete any records. I am maintaing only last 30 days tables. I dropped\n> tables which are older than 30 days. All the tables are only have basic data\n> types like int, smallint, bigint, varchar.\n>\n>\n>>\n>> Were you using MyISAM or InnoDB on MySQL?\n>\n>\n> I am using MyISAM tables in MySQL.\n\nYou can't compare a non-MVCC system such as MyISAM with a MVCC one.\nMVCC systems have to store extra accounting information in order to\nmanage transactions and multiple versions of the same record for SQL\nupdates. MVCC isn't all bad: for example you get much better\nperformance in the face of highly concurrent activity. MyISAM does\nfull table locks which are not scalable at all. The penalty for MVCC\nstorage may in some cases seem quite high if your tables have very\nnarrow records.\n\nBTW, I am suspicious that your claim that you 'don't need'\ntransactions is correct, especially in the long term.\n\nAnyways, there are several techniques to try and mitigate data growth\nin postgres -- arrays for example.\n\nmerlin\n\n",
"msg_date": "Thu, 16 Aug 2012 16:16:59 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Disk write and space taken by PostgreSQL"
}
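Merlin's closing hint about arrays is terse; one plausible reading, sketched here with made-up names, is to fold many narrow samples into a single row so the per-tuple MVCC header is paid once per batch instead of once per reading:

-- One row per device per hour instead of one row per sample.
CREATE TABLE samples_hourly (
    device_id integer       NOT NULL,
    bucket    timestamptz   NOT NULL,
    ts        timestamptz[] NOT NULL,
    value     real[]        NOT NULL,
    PRIMARY KEY (device_id, bucket)
);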
] |
[
{
"msg_contents": "Hi, \n\nif I have a table that daily at night is deleted about 8 millions of rows\n(table maybe has 9 millions) is recommended to do a vacuum analyze after\ndelete completes or can I leave this job to autovacuum?\n\n \n\nThis table is very active during the day but less active during night\n\n \n\nI think that the only only thing where Postgres is weak, is in this area\n(table and index bloat).\n\n \n\nFor some reason for the same amount of data every day postgres consume a\nlittle more.\n\n \n\nThanks!\n\n\nHi, if I have a table that daily at night is deleted about 8 millions of rows (table maybe has 9 millions) is recommended to do a vacuum analyze after delete completes or can I leave this job to autovacuum? This table is very active during the day but less active during night I think that the only only thing where Postgres is weak, is in this area (table and index bloat). For some reason for the same amount of data every day postgres consume a little more. Thanks!",
"msg_date": "Thu, 16 Aug 2012 16:33:56 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "best practice to avoid table bloat?"
},
{
"msg_contents": "\nOn 08/16/2012 04:33 PM, Anibal David Acosta wrote:\n>\n> Hi,\n>\n> if I have a table that daily at night is deleted about 8 millions of \n> rows (table maybe has 9 millions) is recommended to do a vacuum \n> analyze after delete completes or can I leave this job to autovacuum?\n>\n> This table is very active during the day but less active during night\n>\n> I think that the only only thing where Postgres is weak, is in this \n> area (table and index bloat).\n>\n> For some reason for the same amount of data every day postgres consume \n> a little more.\n>\n>\n\n\nCheck out pg_reorg.\n\ncheers\n\nandrew\n\n",
"msg_date": "Thu, 16 Aug 2012 16:48:29 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best practice to avoid table bloat?"
},
{
"msg_contents": "\"Anibal David Acosta\" <[email protected]> wrote:\n \n> if I have a table that daily at night is deleted about 8 millions\n> of rows (table maybe has 9 millions) is recommended to do a vacuum\n> analyze after delete completes or can I leave this job to\n> autovacuum?\n \nDeleting a high percentage of the rows should cause autovacuum to\ndeal with the table the next time it wakes up, so an explicit VACUUM\nANALYZE shouldn't be needed.\n \n> For some reason for the same amount of data every day postgres\n> consume a little more.\n \nHow are you measuring the data and how are you measuring the space? \nAnd what version of PostgreSQL is this?\n \n-Kevin\n\n",
"msg_date": "Thu, 16 Aug 2012 15:52:23 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best practice to avoid table bloat?"
},
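To confirm whether autovacuum really is keeping up after the nightly delete, the standard statistics views already answer most of it; a sketch:

-- Dead-tuple counts and the last (auto)vacuum/analyze per table.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum,
       last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;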
{
"msg_contents": "Thanks Kevin.\nPostgres version is 9.1.4 (lastest)\n\nEvery day the table has about 7 millions of new rows.\nThe table hold the data for 60 days, so approx. the total rows must be\naround 420 millions.\nEvery night a delete process run, and remove rows older than 60 days.\n\nSo, the space used by postgres should not be increase drastically because\nevery day arrive 7 millions of rows but also same quantity is deleted but my\ndisk get out of space every 4 months.\nI must copy tables outside the server, delete local table and create it\nagain, after this process I got again space for about 4 months.\n\nMaybe is a wrong autovacuum config, but is really complicate to understand\nwhat values are correct to avoid performance penalty but to keep table in\ngood fit.\n\nI think that autovacuum configuration should have some like \"auto-config\"\nthat recalculate every day which is the best configuration for the server\ncondition\n\nThanks!\n\n\n-----Mensaje original-----\nDe: Kevin Grittner [mailto:[email protected]] \nEnviado el: jueves, 16 de agosto de 2012 04:52 p.m.\nPara: Anibal David Acosta; [email protected]\nAsunto: Re: [PERFORM] best practice to avoid table bloat?\n\n\"Anibal David Acosta\" <[email protected]> wrote:\n \n> if I have a table that daily at night is deleted about 8 millions of \n> rows (table maybe has 9 millions) is recommended to do a vacuum \n> analyze after delete completes or can I leave this job to autovacuum?\n \nDeleting a high percentage of the rows should cause autovacuum to deal with\nthe table the next time it wakes up, so an explicit VACUUM ANALYZE shouldn't\nbe needed.\n \n> For some reason for the same amount of data every day postgres consume \n> a little more.\n \nHow are you measuring the data and how are you measuring the space? \nAnd what version of PostgreSQL is this?\n \n-Kevin\n\n\n",
"msg_date": "Thu, 16 Aug 2012 17:10:31 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best practice to avoid table bloat?"
},
{
"msg_contents": "[please don't top-post]\n\n\"Anibal David Acosta\" <[email protected]> wrote:\n> Kevin Grittner <[email protected]> wrote:\n>> \"Anibal David Acosta\" <[email protected]> wrote:\n>> \n>>> if I have a table that daily at night is deleted about 8\n>>> millions of rows (table maybe has 9 millions) is recommended to\n>>> do a vacuum analyze after delete completes or can I leave this\n>>> job to autovacuum?\n>> \n>> Deleting a high percentage of the rows should cause autovacuum to\n>> deal with the table the next time it wakes up, so an explicit\n>> VACUUM ANALYZE shouldn't be needed.\n \n> Every day the table has about 7 millions of new rows.\n> The table hold the data for 60 days, so approx. the total rows\n> must be around 420 millions.\n> Every night a delete process run, and remove rows older than 60\n> days.\n \nOh, I thought you were saying the table grew to 9 million rows each\nday and you deleted 8 million of them each night. That would\ndefinitely trigger autovacuum. Deleting 7 million rows from a table\nof 420 million rows would not, so an explicit VACUUM ANALYZE after\nthe delete might be helpful. Even better, with a pattern like that,\nyou might want to consider partitioning the table:\n \nhttp://www.postgresql.org/docs/9.1/static/ddl-partitioning.html\n \n>>> For some reason for the same amount of data every day postgres\n>>> consume a little more.\n>> \n>> How are you measuring the data and how are you measuring the\n>> space?\n \n> [no answer]\n \nWithout knowing what is increasing, it's hard to say why it is\nincreasing. For all we know you are logging all statements and\nnever deleting log files. The solution for that would be entirely\ndifferent from the solution for some other problem.\n \n> So, the space used by postgres should not be increase drastically\n> because every day arrive 7 millions of rows but also same quantity\n> is deleted but my disk get out of space every 4 months.\n \nWhat is getting bigger over time?\n \n> I must copy tables outside the server, delete local table and\n> create it again, after this process I got again space for about 4\n> months.\n \nHow do you do that? pg_dump, DROP TABLE, restore the dump? Have\nyou captured sizes of heap, toast, indexes, etc. before and after\nthis aggressive maintenance? Is the size going up by orders of\nmagnitude, or are you running really tight and getting killed by a\n10% increase. We don't know unless you tell us.\n \n> Maybe is a wrong autovacuum config, but is really complicate to\n> understand what values are correct to avoid performance penalty\n> but to keep table in good fit.\n \nPlease show us the entire result from running this query:\n \nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \n-Kevin\n\n",
"msg_date": "Thu, 16 Aug 2012 16:39:19 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best practice to avoid table bloat?"
}
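A bare-bones sketch of the partitioning Kevin points at, in the constraint-exclusion style current in 9.1 (names are illustrative, and the insert-routing trigger the linked docs describe is omitted). The nightly purge then becomes a DROP TABLE, which frees space immediately and leaves nothing behind for vacuum to clean up:

CREATE TABLE events (
    id         bigserial,
    payload    text,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- One child table per day, constrained so the planner can skip it.
CREATE TABLE events_2012_08_16 (
    CHECK (created_at >= DATE '2012-08-16' AND created_at < DATE '2012-08-17')
) INHERITS (events);

-- Retention is then dropping the oldest child instead of DELETEing
-- millions of rows, e.g.:  DROP TABLE events_2012_06_16;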
] |
[
{
"msg_contents": "Is seq.setval() \"non transactional\" in the same sense as seq.nextval()\nis? More specifically, suppose I sometimes want to get IDs one-by-one\nusing nextval(), but sometimes I want a block of a thousand IDs. To\nget the latter, I want to do this:\n\n select setval('object_id_seq', nextval('object_id_seq') + 1000, false);\n\nNow suppose two processes do this simultaneously. Maybe they're in\ntransactions, maybe they're not. Are they guaranteed to get distinct\nblocks of IDs? Or is it possible that each will execute nextval() and\nget N and N+1 respectively, and then do setval() to N+1000 and N+1001,\nresulting in two overlapping blocks.\n\nIf the answer is, \"This won't work,\" then what's a better way to do this?\n\nThanks,\nCraig\n\n",
"msg_date": "Mon, 20 Aug 2012 16:32:27 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Does setval(nextval()+N) generate unique blocks of IDs?"
},
{
"msg_contents": "Craig James <[email protected]> writes:\n> I want to do this:\n\n> select setval('object_id_seq', nextval('object_id_seq') + 1000, false);\n\n> Now suppose two processes do this simultaneously. Maybe they're in\n> transactions, maybe they're not. Are they guaranteed to get distinct\n> blocks of IDs?\n\nNo, because the setval and the nextval are not indivisible.\n\n> Or is it possible that each will execute nextval() and\n> get N and N+1 respectively, and then do setval() to N+1000 and N+1001,\n> resulting in two overlapping blocks.\n\nExactly.\n\n> If the answer is, \"This won't work,\" then what's a better way to do this?\n\nAFAIK the only way at the moment is\n\n* acquire some advisory lock that by convention you use for this sequence\n* advance the sequence\n* release advisory lock\n\nThere have been previous discussions of this type of problem, eg\nhttp://archives.postgresql.org/pgsql-hackers/2011-09/msg01031.php\nbut the topic doesn't seem to have come up quite often enough to\nmotivate anybody to do anything about it. Your particular case could be\nhandled by a variant of nextval() with a number-of-times-to-advance\nargument, but I'm not sure if that's enough for other scenarios.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 20 Aug 2012 20:10:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does setval(nextval()+N) generate unique blocks of IDs?"
},
{
"msg_contents": "On Mon, Aug 20, 2012 at 6:10 PM, Tom Lane <[email protected]> wrote:\n> Craig James <[email protected]> writes:\n>> I want to do this:\n>\n>> select setval('object_id_seq', nextval('object_id_seq') + 1000, false);\n>\n>> Now suppose two processes do this simultaneously. Maybe they're in\n>> transactions, maybe they're not. Are they guaranteed to get distinct\n>> blocks of IDs?\n>\n> No, because the setval and the nextval are not indivisible.\n>\n>> Or is it possible that each will execute nextval() and\n>> get N and N+1 respectively, and then do setval() to N+1000 and N+1001,\n>> resulting in two overlapping blocks.\n>\n> Exactly.\n>\n>> If the answer is, \"This won't work,\" then what's a better way to do this?\n>\n> AFAIK the only way at the moment is\n>\n> * acquire some advisory lock that by convention you use for this sequence\n> * advance the sequence\n> * release advisory lock\n>\n> There have been previous discussions of this type of problem, eg\n> http://archives.postgresql.org/pgsql-hackers/2011-09/msg01031.php\n> but the topic doesn't seem to have come up quite often enough to\n> motivate anybody to do anything about it. Your particular case could be\n> handled by a variant of nextval() with a number-of-times-to-advance\n> argument, but I'm not sure if that's enough for other scenarios.\n\nIf the OP could live with large gaps in his sequence, he could set it\nto advance by say 1000 at a time, and then use the numbers in that gap\nfreely. Just a thought.\n\n",
"msg_date": "Mon, 20 Aug 2012 18:59:43 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does setval(nextval()+N) generate unique blocks of IDs?"
},
{
"msg_contents": "On Mon, Aug 20, 2012 at 6:59 PM, Scott Marlowe <[email protected]> wrote:\n> On Mon, Aug 20, 2012 at 6:10 PM, Tom Lane <[email protected]> wrote:\n>> Craig James <[email protected]> writes:\n>>> I want to do this:\n>>\n>>> select setval('object_id_seq', nextval('object_id_seq') + 1000, false);\n>>\n>>> Now suppose two processes do this simultaneously. Maybe they're in\n>>> transactions, maybe they're not. Are they guaranteed to get distinct\n>>> blocks of IDs?\n>>\n>> No, because the setval and the nextval are not indivisible.\n>>\n>>> Or is it possible that each will execute nextval() and\n>>> get N and N+1 respectively, and then do setval() to N+1000 and N+1001,\n>>> resulting in two overlapping blocks.\n>>\n>> Exactly.\n>>\n>>> If the answer is, \"This won't work,\" then what's a better way to do this?\n>>\n>> AFAIK the only way at the moment is\n>>\n>> * acquire some advisory lock that by convention you use for this sequence\n>> * advance the sequence\n>> * release advisory lock\n>>\n>> There have been previous discussions of this type of problem, eg\n>> http://archives.postgresql.org/pgsql-hackers/2011-09/msg01031.php\n>> but the topic doesn't seem to have come up quite often enough to\n>> motivate anybody to do anything about it. Your particular case could be\n>> handled by a variant of nextval() with a number-of-times-to-advance\n>> argument, but I'm not sure if that's enough for other scenarios.\n>\n> If the OP could live with large gaps in his sequence, he could set it\n> to advance by say 1000 at a time, and then use the numbers in that gap\n> freely. Just a thought.\n\nBetter yet set cache = 1000; here's an example:\n\ncreate sequence a cache 1000;\nT1: select nextval('a');\n1\nT2: select nextval('a');\n1001\nT1: select nextval('a');\n2\nT2: select nextval('a');\n1002\n\nand so on.\n\nNow can he just select nextval('a'); 1000 times in a loop? Or would\nhe prefer another method.\n\nI guess I'm kind of wondering which problem he's trying to solve.\n\n",
"msg_date": "Mon, 20 Aug 2012 19:06:25 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does setval(nextval()+N) generate unique blocks of IDs?"
},
{
"msg_contents": "On Mon, Aug 20, 2012 at 7:06 PM, Scott Marlowe <[email protected]> wrote:\n> On Mon, Aug 20, 2012 at 6:59 PM, Scott Marlowe <[email protected]> wrote:\n>> On Mon, Aug 20, 2012 at 6:10 PM, Tom Lane <[email protected]> wrote:\n>>> Craig James <[email protected]> writes:\n>>>> I want to do this:\n>>>\n>>>> select setval('object_id_seq', nextval('object_id_seq') + 1000, false);\n>>>\n>>>> Now suppose two processes do this simultaneously. Maybe they're in\n>>>> transactions, maybe they're not. Are they guaranteed to get distinct\n>>>> blocks of IDs?\n>>>\n>>> No, because the setval and the nextval are not indivisible.\n>>>\n>>>> Or is it possible that each will execute nextval() and\n>>>> get N and N+1 respectively, and then do setval() to N+1000 and N+1001,\n>>>> resulting in two overlapping blocks.\n>>>\n>>> Exactly.\n>>>\n>>>> If the answer is, \"This won't work,\" then what's a better way to do this?\n>>>\n>>> AFAIK the only way at the moment is\n>>>\n>>> * acquire some advisory lock that by convention you use for this sequence\n>>> * advance the sequence\n>>> * release advisory lock\n>>>\n>>> There have been previous discussions of this type of problem, eg\n>>> http://archives.postgresql.org/pgsql-hackers/2011-09/msg01031.php\n>>> but the topic doesn't seem to have come up quite often enough to\n>>> motivate anybody to do anything about it. Your particular case could be\n>>> handled by a variant of nextval() with a number-of-times-to-advance\n>>> argument, but I'm not sure if that's enough for other scenarios.\n>>\n>> If the OP could live with large gaps in his sequence, he could set it\n>> to advance by say 1000 at a time, and then use the numbers in that gap\n>> freely. Just a thought.\n>\n> Better yet set cache = 1000; here's an example:\n>\n> create sequence a cache 1000;\n> T1: select nextval('a');\n> 1\n> T2: select nextval('a');\n> 1001\n> T1: select nextval('a');\n> 2\n> T2: select nextval('a');\n> 1002\n>\n> and so on.\n>\n> Now can he just select nextval('a'); 1000 times in a loop? Or would\n> he prefer another method.\n>\n> I guess I'm kind of wondering which problem he's trying to solve.\n\nMade a sequence:\ncreate sequence a;\n\nthen ran a one line\nselect nextval('a');\nagainst it 1000 times from bash, i.e. the worst vase performance scenario:\n\ntime for ((i=0;i<1000;i++));do psql -f t1 > /dev/null;done\n\nreal\t1m1.978s\nuser\t0m41.999s\nsys\t0m12.277s\n\nthen I ran it a singe time on a file with 1000 select nextvals:\n\ntime psql -f t1000 > /dev/null\n\nreal\t0m0.486s\nuser\t0m0.112s\nsys\t0m0.036s\n\nThen I recreated sequence a:\n\ncreate sequence a cache 1000;\n\nand ran it again:\n\ntime psql -f t1000 > /dev/null\n\nreal\t0m0.293s\nuser\t0m0.120s\nsys\t0m0.024s\n\nI'd imagine in a real programming oangua\n\n",
"msg_date": "Tue, 21 Aug 2012 01:41:16 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does setval(nextval()+N) generate unique blocks of IDs?"
},
{
"msg_contents": "On Tue, Aug 21, 2012 at 1:41 AM, Scott Marlowe <[email protected]> wrote:\n> On Mon, Aug 20, 2012 at 7:06 PM, Scott Marlowe <[email protected]> wrote:\n>> On Mon, Aug 20, 2012 at 6:59 PM, Scott Marlowe <[email protected]> wrote:\n>>> On Mon, Aug 20, 2012 at 6:10 PM, Tom Lane <[email protected]> wrote:\n>>>> Craig James <[email protected]> writes:\n>>>>> I want to do this:\n>>>>\n>>>>> select setval('object_id_seq', nextval('object_id_seq') + 1000, false);\n>>>>\n>>>>> Now suppose two processes do this simultaneously. Maybe they're in\n>>>>> transactions, maybe they're not. Are they guaranteed to get distinct\n>>>>> blocks of IDs?\n>>>>\n>>>> No, because the setval and the nextval are not indivisible.\n>>>>\n>>>>> Or is it possible that each will execute nextval() and\n>>>>> get N and N+1 respectively, and then do setval() to N+1000 and N+1001,\n>>>>> resulting in two overlapping blocks.\n>>>>\n>>>> Exactly.\n>>>>\n>>>>> If the answer is, \"This won't work,\" then what's a better way to do this?\n>>>>\n>>>> AFAIK the only way at the moment is\n>>>>\n>>>> * acquire some advisory lock that by convention you use for this sequence\n>>>> * advance the sequence\n>>>> * release advisory lock\n>>>>\n>>>> There have been previous discussions of this type of problem, eg\n>>>> http://archives.postgresql.org/pgsql-hackers/2011-09/msg01031.php\n>>>> but the topic doesn't seem to have come up quite often enough to\n>>>> motivate anybody to do anything about it. Your particular case could be\n>>>> handled by a variant of nextval() with a number-of-times-to-advance\n>>>> argument, but I'm not sure if that's enough for other scenarios.\n>>>\n>>> If the OP could live with large gaps in his sequence, he could set it\n>>> to advance by say 1000 at a time, and then use the numbers in that gap\n>>> freely. Just a thought.\n>>\n>> Better yet set cache = 1000; here's an example:\n>>\n>> create sequence a cache 1000;\n>> T1: select nextval('a');\n>> 1\n>> T2: select nextval('a');\n>> 1001\n>> T1: select nextval('a');\n>> 2\n>> T2: select nextval('a');\n>> 1002\n>>\n>> and so on.\n>>\n>> Now can he just select nextval('a'); 1000 times in a loop? Or would\n>> he prefer another method.\n>>\n>> I guess I'm kind of wondering which problem he's trying to solve.\n>\n> Made a sequence:\n> create sequence a;\n>\n> then ran a one line\n> select nextval('a');\n> against it 1000 times from bash, i.e. the worst vase performance scenario:\n>\n> time for ((i=0;i<1000;i++));do psql -f t1 > /dev/null;done\n>\n> real 1m1.978s\n> user 0m41.999s\n> sys 0m12.277s\n>\n> then I ran it a singe time on a file with 1000 select nextvals:\n>\n> time psql -f t1000 > /dev/null\n>\n> real 0m0.486s\n> user 0m0.112s\n> sys 0m0.036s\n>\n> Then I recreated sequence a:\n>\n> create sequence a cache 1000;\n>\n> and ran it again:\n>\n> time psql -f t1000 > /dev/null\n>\n> real 0m0.293s\n> user 0m0.120s\n> sys 0m0.024s\n>\n> I'd imagine in a real programming oangua\n\nsometimes I hate my laptops touchpad. Ran something similar in php\ngot similar performance. By comparison, running select 1 instead of\nnextval() took ~0.160s to run.\n\n",
"msg_date": "Tue, 21 Aug 2012 01:45:12 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does setval(nextval()+N) generate unique blocks of IDs?"
},
{
"msg_contents": "On Tue, Aug 21, 2012 at 2:45 AM, Scott Marlowe <[email protected]> wrote:\n> sometimes I hate my laptops touchpad. Ran something similar in php\n> got similar performance. By comparison, running select 1 instead of\n> nextval() took ~0.160s to run.\n\nyou're mostly measuring client overhead i think:\n\npostgres=# explain analyze select nextval('s') from\ngenerate_series(1,1000); explain analyze select nextval('s') from\ngenerate_series(1,1000);\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Function Scan on generate_series (cost=0.00..12.50 rows=1000\nwidth=0) (actual time=0.149..1.320 rows=1000 loops=1)\n Total runtime: 1.806 ms\n\npostgres=# do\n$$\nbegin\n for x in 1..1000 loop\n perform nextval('s');\n end loop;\nend;\n$$ language plpgsql;\nDO\nTime: 4.333 ms\n\nAnyways, the only reason to do advisory locking is if you\na) strictly need contiguous blocks of ids\nand\nb) are worried about concurrency and the id is fetched early in a\nnon-trivial transaction\n\nIf a) isn't true, it's better to do looped nextval, and if b) isn't\ntrue, IMO it's better to maintain a value in a table and let mvcc\nhandle things. Being able to grab sequences in a block without manual\nlocking would be a nice feature but only if it could be done without\nadding an iota of overhead to standard usage :-).\n\nmerlin\n\n",
"msg_date": "Tue, 21 Aug 2012 09:12:59 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does setval(nextval()+N) generate unique blocks of IDs?"
},
{
"msg_contents": "On Mon, Aug 20, 2012 at 6:06 PM, Scott Marlowe <[email protected]> wrote:\n> On Mon, Aug 20, 2012 at 6:59 PM, Scott Marlowe <[email protected]> wrote:\n>> On Mon, Aug 20, 2012 at 6:10 PM, Tom Lane <[email protected]> wrote:\n>>> Craig James <[email protected]> writes:\n>>>> I want to do this:\n>>>\n>>>> select setval('object_id_seq', nextval('object_id_seq') + 1000, false);\n>>>\n>>>> Now suppose two processes do this simultaneously. Maybe they're in\n>>>> transactions, maybe they're not. Are they guaranteed to get distinct\n>>>> blocks of IDs?\n>>>\n>>> No, because the setval and the nextval are not indivisible.\n>>>\n>>>> Or is it possible that each will execute nextval() and\n>>>> get N and N+1 respectively, and then do setval() to N+1000 and N+1001,\n>>>> resulting in two overlapping blocks.\n>>>\n>>> Exactly.\n>>>\n>>>> If the answer is, \"This won't work,\" then what's a better way to do this?\n\n--- snip ---\n\n>> If the OP could live with large gaps in his sequence, he could set it\n>> to advance by say 1000 at a time, and then use the numbers in that gap\n>> freely. Just a thought.\n>\n> Better yet set cache = 1000; here's an example:\n>\n> create sequence a cache 1000;\n> T1: select nextval('a');\n> 1\n> T2: select nextval('a');\n> 1001\n> T1: select nextval('a');\n> 2\n> T2: select nextval('a');\n> 1002\n>\n> and so on.\n>\n> Now can he just select nextval('a'); 1000 times in a loop? Or would\n> he prefer another method.\n>\n> I guess I'm kind of wondering which problem he's trying to solve.\n\nI thought of that, but I can't live with large gaps in the sequence.\nIt's used for 32-bit keys, and at a maximum rate of use we have about\n30 years before we run out of numbers. If I start using 1000-item\nblocks, we could run out in a few months even at today's usage.\nBesides which, it doesn't solve the problem, because what do I do when\nan application asks for a block of 1001 items?\n\nIt's also inefficient to call nextval() 10, 100 (or 10000, or 100000)\ntimes in a row just to get a guaranteed-unique block of identifiers.\n\nThanks,\nCraig\n\n",
"msg_date": "Tue, 21 Aug 2012 07:36:41 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Does setval(nextval()+N) generate unique blocks of IDs?"
},
{
"msg_contents": "On Mon, Aug 20, 2012 at 5:10 PM, Tom Lane <[email protected]> wrote:\n> Craig James <[email protected]> writes:\n>> I want to do this:\n>\n>> select setval('object_id_seq', nextval('object_id_seq') + 1000, false);\n>\n>> Now suppose two processes do this simultaneously. Maybe they're in\n>> transactions, maybe they're not. Are they guaranteed to get distinct\n>> blocks of IDs?\n>\n> No, because the setval and the nextval are not indivisible.\n>\n>> Or is it possible that each will execute nextval() and\n>> get N and N+1 respectively, and then do setval() to N+1000 and N+1001,\n>> resulting in two overlapping blocks.\n>\n> Exactly.\n>\n>> If the answer is, \"This won't work,\" then what's a better way to do this?\n>\n> AFAIK the only way at the moment is\n>\n> * acquire some advisory lock that by convention you use for this sequence\n> * advance the sequence\n> * release advisory lock\n>\n> There have been previous discussions of this type of problem, eg\n> http://archives.postgresql.org/pgsql-hackers/2011-09/msg01031.php\n> but the topic doesn't seem to have come up quite often enough to\n> motivate anybody to do anything about it. Your particular case could be\n> handled by a variant of nextval() with a number-of-times-to-advance\n> argument, but I'm not sure if that's enough for other scenarios.\n>\n> regards, tom lane\n\nSo here's what I came up with. I'm no PLPGSQL guru, but it seemed\npretty straightforward.\n\ncreate or replace function nextval_block(bsize integer default 1)\n returns bigint as $nextval_block$\n declare\n bstart bigint;\n begin\n perform pg_advisory_lock(1);\n select into bstart nextval('my_seq');\n perform setval('my_seq', bstart + bsize, false);\n perform pg_advisory_unlock(1);\n return bstart;\n end;\n$nextval_block$ language plpgsql;\n\nAs long as I ensure that every application uses nextval_block()\ninstead of nextval() to access this sequence, I think this will do\nwhat I want.\n\ntestdb=> select nextval_block();\n nextval_block\n---------------\n 1\n(1 row)\n\ntestdb=> select nextval_block();\n nextval_block\n---------------\n 2\n(1 row)\n\n\ntestdb=> select nextval_block(1000);\n nextval_block\n---------------\n 3\n(1 row)\n\ntestdb=> select nextval_block(1000);\n nextval_block\n---------------\n 1003\n(1 row)\n\ntestdb=> select nextval_block(1000);\n nextval_block\n---------------\n 2003\n(1 row)\n\nUsing pgsql's \\timing directive, it says it's roughly 0.45 msec per\nrequest with the client and server are on the same machines, and 0.55\nmsec per request when the client and server are different machines.\nNot bad.\n\nThanks for your help!\nCraig\n\n",
"msg_date": "Tue, 21 Aug 2012 08:32:47 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Does setval(nextval()+N) generate unique blocks of IDs?"
},
{
"msg_contents": "On Tue, Aug 21, 2012 at 10:32 AM, Craig James <[email protected]> wrote:\n> On Mon, Aug 20, 2012 at 5:10 PM, Tom Lane <[email protected]> wrote:\n>> Craig James <[email protected]> writes:\n>>> I want to do this:\n>>\n>>> select setval('object_id_seq', nextval('object_id_seq') + 1000, false);\n>>\n>>> Now suppose two processes do this simultaneously. Maybe they're in\n>>> transactions, maybe they're not. Are they guaranteed to get distinct\n>>> blocks of IDs?\n>>\n>> No, because the setval and the nextval are not indivisible.\n>>\n>>> Or is it possible that each will execute nextval() and\n>>> get N and N+1 respectively, and then do setval() to N+1000 and N+1001,\n>>> resulting in two overlapping blocks.\n>>\n>> Exactly.\n>>\n>>> If the answer is, \"This won't work,\" then what's a better way to do this?\n>>\n>> AFAIK the only way at the moment is\n>>\n>> * acquire some advisory lock that by convention you use for this sequence\n>> * advance the sequence\n>> * release advisory lock\n>>\n>> There have been previous discussions of this type of problem, eg\n>> http://archives.postgresql.org/pgsql-hackers/2011-09/msg01031.php\n>> but the topic doesn't seem to have come up quite often enough to\n>> motivate anybody to do anything about it. Your particular case could be\n>> handled by a variant of nextval() with a number-of-times-to-advance\n>> argument, but I'm not sure if that's enough for other scenarios.\n>>\n>> regards, tom lane\n>\n> So here's what I came up with. I'm no PLPGSQL guru, but it seemed\n> pretty straightforward.\n>\n> create or replace function nextval_block(bsize integer default 1)\n> returns bigint as $nextval_block$\n> declare\n> bstart bigint;\n> begin\n> perform pg_advisory_lock(1);\n> select into bstart nextval('my_seq');\n> perform setval('my_seq', bstart + bsize, false);\n> perform pg_advisory_unlock(1);\n> return bstart;\n> end;\n> $nextval_block$ language plpgsql;\n>\n> As long as I ensure that every application uses nextval_block()\n> instead of nextval() to access this sequence, I think this will do\n> what I want.\n>\n> testdb=> select nextval_block();\n> nextval_block\n> ---------------\n> 1\n> (1 row)\n>\n> testdb=> select nextval_block();\n> nextval_block\n> ---------------\n> 2\n> (1 row)\n>\n>\n> testdb=> select nextval_block(1000);\n> nextval_block\n> ---------------\n> 3\n> (1 row)\n>\n> testdb=> select nextval_block(1000);\n> nextval_block\n> ---------------\n> 1003\n> (1 row)\n>\n> testdb=> select nextval_block(1000);\n> nextval_block\n> ---------------\n> 2003\n> (1 row)\n>\n> Using pgsql's \\timing directive, it says it's roughly 0.45 msec per\n> request with the client and server are on the same machines, and 0.55\n> msec per request when the client and server are different machines.\n> Not bad.\n\nIf you also need to get only 1 id, in those cases you can sharelock\ninstead of full lock -- you can treat the case of blocksize=1\nspecially.\n\nmerlin\n\n",
"msg_date": "Tue, 21 Aug 2012 12:53:30 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does setval(nextval()+N) generate unique blocks of IDs?"
},
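Merlin's shared-lock refinement could look like the sketch below, built on Craig's function above and reusing its my_seq sequence and advisory-lock key 1. Single-id callers only need to wait out an in-progress block allocation, and shared locks do not block one another, so the common case stays cheap:

create or replace function nextval_block(bsize integer default 1)
  returns bigint as $nextval_block$
declare
  bstart bigint;
begin
  if bsize = 1 then
    -- a lone nextval() never collides with another nextval(); it only has to
    -- exclude a concurrent setval()-based block grab, so a shared lock suffices
    perform pg_advisory_lock_shared(1);
    select into bstart nextval('my_seq');
    perform pg_advisory_unlock_shared(1);
  else
    perform pg_advisory_lock(1);
    select into bstart nextval('my_seq');
    perform setval('my_seq', bstart + bsize, false);
    perform pg_advisory_unlock(1);
  end if;
  return bstart;
end;
$nextval_block$ language plpgsql;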
{
"msg_contents": "On Tue, Aug 21, 2012 at 9:32 AM, Craig James <[email protected]> wrote:\n> On Mon, Aug 20, 2012 at 5:10 PM, Tom Lane <[email protected]> wrote:\n>> Craig James <[email protected]> writes:\n>>> I want to do this:\n>>\n>>> select setval('object_id_seq', nextval('object_id_seq') + 1000, false);\n>>\n>>> Now suppose two processes do this simultaneously. Maybe they're in\n>>> transactions, maybe they're not. Are they guaranteed to get distinct\n>>> blocks of IDs?\n>>\n>> No, because the setval and the nextval are not indivisible.\n>>\n>>> Or is it possible that each will execute nextval() and\n>>> get N and N+1 respectively, and then do setval() to N+1000 and N+1001,\n>>> resulting in two overlapping blocks.\n>>\n>> Exactly.\n>>\n>>> If the answer is, \"This won't work,\" then what's a better way to do this?\n>>\n>> AFAIK the only way at the moment is\n>>\n>> * acquire some advisory lock that by convention you use for this sequence\n>> * advance the sequence\n>> * release advisory lock\n>>\n>> There have been previous discussions of this type of problem, eg\n>> http://archives.postgresql.org/pgsql-hackers/2011-09/msg01031.php\n>> but the topic doesn't seem to have come up quite often enough to\n>> motivate anybody to do anything about it. Your particular case could be\n>> handled by a variant of nextval() with a number-of-times-to-advance\n>> argument, but I'm not sure if that's enough for other scenarios.\n>>\n>> regards, tom lane\n>\n> So here's what I came up with. I'm no PLPGSQL guru, but it seemed\n> pretty straightforward.\n>\n> create or replace function nextval_block(bsize integer default 1)\n> returns bigint as $nextval_block$\n> declare\n> bstart bigint;\n> begin\n> perform pg_advisory_lock(1);\n> select into bstart nextval('my_seq');\n> perform setval('my_seq', bstart + bsize, false);\n> perform pg_advisory_unlock(1);\n> return bstart;\n> end;\n> $nextval_block$ language plpgsql;\n\nThat seems unnecessarily complex. how about this:\n\ncreate sequence s;\nselect array_agg (a.b) from (select nextval('s') as b from\ngenerate_series(1,1000)) as a;\n\nThen you just iterate that array for the ids you need.\n\n",
"msg_date": "Tue, 21 Aug 2012 14:03:38 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does setval(nextval()+N) generate unique blocks of IDs?"
},
{
"msg_contents": "On Tue, Aug 21, 2012 at 2:03 PM, Scott Marlowe <[email protected]> wrote:\n> On Tue, Aug 21, 2012 at 9:32 AM, Craig James <[email protected]> wrote:\n>> On Mon, Aug 20, 2012 at 5:10 PM, Tom Lane <[email protected]> wrote:\n>>> Craig James <[email protected]> writes:\n>>>> I want to do this:\n>>>\n>>>> select setval('object_id_seq', nextval('object_id_seq') + 1000, false);\n>>>\n>>>> Now suppose two processes do this simultaneously. Maybe they're in\n>>>> transactions, maybe they're not. Are they guaranteed to get distinct\n>>>> blocks of IDs?\n>>>\n>>> No, because the setval and the nextval are not indivisible.\n>>>\n>>>> Or is it possible that each will execute nextval() and\n>>>> get N and N+1 respectively, and then do setval() to N+1000 and N+1001,\n>>>> resulting in two overlapping blocks.\n>>>\n>>> Exactly.\n>>>\n>>>> If the answer is, \"This won't work,\" then what's a better way to do this?\n>>>\n>>> AFAIK the only way at the moment is\n>>>\n>>> * acquire some advisory lock that by convention you use for this sequence\n>>> * advance the sequence\n>>> * release advisory lock\n>>>\n>>> There have been previous discussions of this type of problem, eg\n>>> http://archives.postgresql.org/pgsql-hackers/2011-09/msg01031.php\n>>> but the topic doesn't seem to have come up quite often enough to\n>>> motivate anybody to do anything about it. Your particular case could be\n>>> handled by a variant of nextval() with a number-of-times-to-advance\n>>> argument, but I'm not sure if that's enough for other scenarios.\n>>>\n>>> regards, tom lane\n>>\n>> So here's what I came up with. I'm no PLPGSQL guru, but it seemed\n>> pretty straightforward.\n>>\n>> create or replace function nextval_block(bsize integer default 1)\n>> returns bigint as $nextval_block$\n>> declare\n>> bstart bigint;\n>> begin\n>> perform pg_advisory_lock(1);\n>> select into bstart nextval('my_seq');\n>> perform setval('my_seq', bstart + bsize, false);\n>> perform pg_advisory_unlock(1);\n>> return bstart;\n>> end;\n>> $nextval_block$ language plpgsql;\n>\n> That seems unnecessarily complex. how about this:\n>\n> create sequence s;\n> select array_agg (a.b) from (select nextval('s') as b from\n> generate_series(1,1000)) as a;\n>\n> Then you just iterate that array for the ids you need.\n\nIf you want it in a comma delimited formate:\n\nselect array_to_string(array_agg (a.b),',') from (select nextval('s')\nas b from generate_series(1,1000)) as a;\n\n",
"msg_date": "Tue, 21 Aug 2012 14:04:59 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does setval(nextval()+N) generate unique blocks of IDs?"
},
{
"msg_contents": "On Tue, Aug 21, 2012 at 1:03 PM, Scott Marlowe <[email protected]> wrote:\n> That seems unnecessarily complex. how about this:\n>\n> create sequence s;\n> select array_agg (a.b) from (select nextval('s') as b from\n> generate_series(1,1000)) as a;\n>\n> Then you just iterate that array for the ids you need.\n\nFor brevity I didn't explain the use-case in detail. I need a series\nof IDs that are unique across a cluster of servers and across time\n(years and decades). The blocksize might be anywhere from 1 to\n100000. One server is the master and issues all IDs.\n\nI don't want to iterate over an array to get the values because it's\ninefficient: if the blocksize is large (say, 100000 items), it will\nrequire 100000 select() statements. The solution using an advisory\nlock along with setvalue() is nice because the application only makes\none select() statement and gets a block of IDs that are guaranteed to\nbe unique across the cluster.\n\nCraig\n\n",
"msg_date": "Tue, 21 Aug 2012 13:59:06 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Does setval(nextval()+N) generate unique blocks of IDs?"
},
{
"msg_contents": "On Tue, Aug 21, 2012 at 2:59 PM, Craig James <[email protected]> wrote:\n> On Tue, Aug 21, 2012 at 1:03 PM, Scott Marlowe <[email protected]> wrote:\n>> That seems unnecessarily complex. how about this:\n>>\n>> create sequence s;\n>> select array_agg (a.b) from (select nextval('s') as b from\n>> generate_series(1,1000)) as a;\n>>\n>> Then you just iterate that array for the ids you need.\n>\n> For brevity I didn't explain the use-case in detail. I need a series\n> of IDs that are unique across a cluster of servers and across time\n> (years and decades). The blocksize might be anywhere from 1 to\n> 100000. One server is the master and issues all IDs.\n>\n> I don't want to iterate over an array to get the values because it's\n> inefficient: if the blocksize is large (say, 100000 items), it will\n> require 100000 select() statements. The solution using an advisory\n> lock along with setvalue() is nice because the application only makes\n> one select() statement and gets a block of IDs that are guaranteed to\n> be unique across the cluster.\n\nAhhh ok. Yeah that's why I said early on I wasn't really sure of your\nuse case, cause that really can make all the difference. Good to\nknow.\n\n",
"msg_date": "Tue, 21 Aug 2012 15:06:56 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does setval(nextval()+N) generate unique blocks of IDs?"
}
] |
[
{
"msg_contents": "I have a PostgreSQL 9.1 cluster. Each node is serving around 1,000 queries per second when we are at a 'steady state'.\n\nWhat I'd like to know is the average query time. I'd like to see if query performance is consistent, or if environmental changes, or code releases, are causing it to drift, spike, or change. I'd also like to be able to compare the (real) query performance on the different nodes.\n\nI know I can put some sort of query wrapper at the application layer to gather and store timing info. (I'm not sure yet how the application would know which node the query just ran on since we are using pgpool between the app and the db.) I'd much rather get something directly out of each database node if I can.\n\nTurning on statement logging crushes the database performance, so I don't want to do that either. (Not to mention I'd still have to parse the logs to get the data.)\n\nIt seems like we almost have everything we need to track this in the stats tables, but not quite. I was hoping the folks on this list would have some tips on how to get query performance trends over time out of each node in my cluster.\n\nThanks!\n\n--\nRick Otten\nData-Systems Engineer\[email protected]\nManta.com<http://manta.com/?referid=emailSig> Where Small Business Grows(tm)\n\n\n\n\n\n\n\n\n\n\nI have a PostgreSQL 9.1 cluster. Each node is serving around 1,000 queries per second when we are at a ‘steady state’.\n \nWhat I’d like to know is the average query time. I’d like to see if query performance is consistent, or if environmental changes, or code releases, are causing it to drift, spike, or change. I’d also like to be able to compare the (real)\n query performance on the different nodes.\n \nI know I can put some sort of query wrapper at the application layer to gather and store timing info. (I’m not sure yet how the application would know which node the query just ran on since we are using pgpool between the app and the db.) \n I’d much rather get something directly out of each database node if I can.\n \nTurning on statement logging crushes the database performance, so I don’t want to do that either. (Not to mention I’d still have to parse the logs to get the data.)\n \nIt seems like we almost have everything we need to track this in the stats tables, but not quite. I was hoping the folks on this list would have some tips on how to get query performance trends over time out of each node in my cluster.\n \nThanks!\n \n-- \nRick Otten\nData-Systems Engineer\[email protected]\nManta.com Where Small Business Grows™",
"msg_date": "Tue, 21 Aug 2012 18:35:22 +0000",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": true,
"msg_subject": "average query performance measuring"
},
{
"msg_contents": "* Rick Otten ([email protected]) wrote:\n> It seems like we almost have everything we need to track this in the stats tables, but not quite. I was hoping the folks on this list would have some tips on how to get query performance trends over time out of each node in my cluster.\n\nI'm afraid the best answer to this is, honestly, \"upgrade to 9.2 once\nit's out\"..\n\nhttp://pgeoghegan.blogspot.com/2012/03/much-improved-statement-statistics.html\n\nIf what's described there doesn't match what you're looking for, then\nplease let us know what else you'd like, so we can further improve\nthings in that area..\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Tue, 21 Aug 2012 14:53:16 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: average query performance measuring"
},
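For anyone wanting to see what the referenced pg_stat_statements work buys in practice, a sketch of the usual setup and a query over it (total_time units differ between releases, so treat the per-call average as relative rather than absolute):

-- postgresql.conf: shared_preload_libraries = 'pg_stat_statements', then restart.
CREATE EXTENSION pg_stat_statements;

-- Normalized statements ranked by cumulative time, with a rough per-call average.
SELECT calls,
       total_time,
       total_time / calls AS avg_time_per_call,
       rows,
       query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 20;

-- Start a fresh measurement window, e.g. right after a code release.
SELECT pg_stat_statements_reset();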
{
"msg_contents": "On 8/21/2012 1:53 PM, Stephen Frost wrote:\n> * Rick Otten ([email protected]) wrote:\n>> It seems like we almost have everything we need to track this in the stats tables, but not quite. I was hoping the folks on this list would have some tips on how to get query performance trends over time out of each node in my cluster.\n> I'm afraid the best answer to this is, honestly, \"upgrade to 9.2 once\n> it's out\"..\n>\n> http://pgeoghegan.blogspot.com/2012/03/much-improved-statement-statistics.html\n>\n> If what's described there doesn't match what you're looking for, then\n> please let us know what else you'd like, so we can further improve\n> things in that area..\n>\n> \tThanks,\n>\n> \t\tStephen\n\nThat looks EXTREMELY useful and I'm looking forward to checking it out\nin 9.2; I have asked a similar question about profiling actual queries\nin the past and basically it came down to \"turn on explain or run a\nseparate explain yourself since the app knows what's similar and what's\nnot\", which of course has hideous performance implications (as the query\nbasically executes twice.)\n\n\n-- \n-- Karl Denninger\n/The Market Ticker ®/ <http://market-ticker.org>\nCuda Systems LLC",
"msg_date": "Tue, 21 Aug 2012 14:48:26 -0500",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: average query performance measuring"
},
{
"msg_contents": "Karl,\n\n* Karl Denninger ([email protected]) wrote:\n> That looks EXTREMELY useful and I'm looking forward to checking it out\n> in 9.2; I have asked a similar question about profiling actual queries\n> in the past and basically it came down to \"turn on explain or run a\n> separate explain yourself since the app knows what's similar and what's\n> not\", which of course has hideous performance implications (as the query\n> basically executes twice.)\n\nJust to clarify one thing- if your application is currently using\nprepared queries for everything, you can probably use the existing\ncontrib module. The difference is that, with 9.2, it'll actually do\nnormalization of non-PREPARED queries and will include some additional\nstatistics and information.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Tue, 21 Aug 2012 16:27:02 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: average query performance measuring"
},
{
"msg_contents": "On 21.8.2012 20:35, Rick Otten wrote:\n> I have a PostgreSQL 9.1 cluster. Each node is serving around 1,000\n> queries per second when we are at a ‘steady state’.\n> \n> What I’d like to know is the average query time. I’d like to see if\n> query performance is consistent, or if environmental changes, or code\n> releases, are causing it to drift, spike, or change. I’d also like to\n> be able to compare the (real) query performance on the different nodes.\n> \n> I know I can put some sort of query wrapper at the application layer to\n> gather and store timing info. (I’m not sure yet how the application\n> would know which node the query just ran on since we are using pgpool\n> between the app and the db.) I’d much rather get something directly\n> out of each database node if I can.\n> \n> Turning on statement logging crushes the database performance, so I\n> don’t want to do that either. (Not to mention I’d still have to parse\n> the logs to get the data.)\n> \n> It seems like we almost have everything we need to track this in the\n> stats tables, but not quite. I was hoping the folks on this list would\n> have some tips on how to get query performance trends over time out of\n> each node in my cluster.\n\nAs others already mentioned, the improvements in pg_stat_statements by\nPeter Geoghean in 9.2 is the first thing you should look into I guess.\nEspecially if you're looking for per-query stats.\n\nIf you're looking for \"global stats,\" you might be interested in an\nextension I wrote a few months ago and collects query histogram. It's\navailable on pgxn.org: http://pgxn.org/dist/query_histogram/\n\nThe question is whether tools like this can give you reliable answers to\nyour questions - that depends on your workload (how much it varies) etc.\n\nTomas\n\n",
"msg_date": "Tue, 21 Aug 2012 23:08:09 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: average query performance measuring"
},
{
"msg_contents": "On 21 August 2012 22:08, Tomas Vondra <[email protected]> wrote:\n> As others already mentioned, the improvements in pg_stat_statements by\n> Peter Geoghean in 9.2 is the first thing you should look into I guess.\n> Especially if you're looking for per-query stats.\n\nIf people would like to know about a better way to monitor query\nexecution costs on earlier versions, I think that I'll probably have\nnew information about that for my talk at Postgres Open.\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n\n",
"msg_date": "Tue, 21 Aug 2012 22:51:12 +0100",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: average query performance measuring"
},
{
"msg_contents": "Thanks! That looks like a handy tool. \n\nI think in this case we'll wait for 9.2. We are looking forward to it.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Tomas Vondra\nSent: Tuesday, August 21, 2012 5:08 PM\nTo: [email protected]\nSubject: Re: [PERFORM] average query performance measuring\n\nOn 21.8.2012 20:35, Rick Otten wrote:\n> I have a PostgreSQL 9.1 cluster. Each node is serving around 1,000 \n> queries per second when we are at a 'steady state'.\n> \n> What I'd like to know is the average query time. I'd like to see if \n> query performance is consistent, or if environmental changes, or code\n> releases, are causing it to drift, spike, or change. I'd also like to\n> be able to compare the (real) query performance on the different nodes.\n> \n> I know I can put some sort of query wrapper at the application layer \n> to gather and store timing info. (I'm not sure yet how the \n> application would know which node the query just ran on since we are using pgpool\n> between the app and the db.) I'd much rather get something directly\n> out of each database node if I can.\n> \n> Turning on statement logging crushes the database performance, so I \n> don't want to do that either. (Not to mention I'd still have to parse \n> the logs to get the data.)\n> \n> It seems like we almost have everything we need to track this in the \n> stats tables, but not quite. I was hoping the folks on this list \n> would have some tips on how to get query performance trends over time \n> out of each node in my cluster.\n\nAs others already mentioned, the improvements in pg_stat_statements by Peter Geoghean in 9.2 is the first thing you should look into I guess.\nEspecially if you're looking for per-query stats.\n\nIf you're looking for \"global stats,\" you might be interested in an extension I wrote a few months ago and collects query histogram. It's available on pgxn.org: http://pgxn.org/dist/query_histogram/\n\nThe question is whether tools like this can give you reliable answers to your questions - that depends on your workload (how much it varies) etc.\n\nTomas\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Wed, 22 Aug 2012 14:04:51 +0000",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: average query performance measuring"
}
] |
[
{
"msg_contents": "Howdy. I'm curious what besides raw hardware speed determines the performance of a Seq Scan that comes entirely out of shared buffers… I ran the following on the client's server I'm profiling, which is otherwise idle:\n\nEXPLAIN (ANALYZE ON, BUFFERS ON) SELECT * FROM notes;\n\nSeq Scan on notes (cost=0.00..94004.88 rows=1926188 width=862) (actual time=0.009..1673.702 rows=1926207 loops=1)\n Buffers: shared hit=74743\n Total runtime: 3110.442 ms\n(3 rows)\n\n\n… and that's about 9x slower than what I get on my laptop with the same data. I ran stream-scaling on the machine and the results seem reasonable (8644.1985 MB/s with 1 core -> 25017 MB/s with 12 cores). The box is running 2.6.26.6-49 and postgresql 9.0.6.\n\nI'm stumped as to why it's so much slower, any ideas on what might explain it… or other benchmarks I could run to try to narrow down the cause?\n\nThanks!\n\nMatt\nHowdy. I'm curious what besides raw hardware speed determines the performance of a Seq Scan that comes entirely out of shared buffers… I ran the following on the client's server I'm profiling, which is otherwise idle:EXPLAIN (ANALYZE ON, BUFFERS ON) SELECT * FROM notes;Seq Scan on notes (cost=0.00..94004.88 rows=1926188 width=862) (actual time=0.009..1673.702 rows=1926207 loops=1) Buffers: shared hit=74743 Total runtime: 3110.442 ms(3 rows)… and that's about 9x slower than what I get on my laptop with the same data. I ran stream-scaling on the machine and the results seem reasonable (8644.1985 MB/s with 1 core -> 25017 MB/s with 12 cores). The box is running 2.6.26.6-49 and postgresql 9.0.6.I'm stumped as to why it's so much slower, any ideas on what might explain it… or other benchmarks I could run to try to narrow down the cause?Thanks!Matt",
"msg_date": "Tue, 21 Aug 2012 15:57:13 -0700",
"msg_from": "Matt Daw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of Seq Scan from buffer cache"
},
{
"msg_contents": "Ugh, never mind. I ran ltrace and it's spending 99% of its time in gettimeofday. \n\nselect count(*) from notes;\n count \n---------\n 1926207\n(1 row)\n\nTime: 213.950 ms\n\nexplain analyze select count(*) from notes;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=99274.59..99274.60 rows=1 width=0) (actual time=2889.325..2889.325 rows=1 loops=1)\n -> Seq Scan on notes (cost=0.00..94459.07 rows=1926207 width=0) (actual time=0.005..1475.218 rows=1926207 loops=1)\n Total runtime: 2889.360 ms\n(3 rows)\n\nTime: 2889.842 ms \n\n\nOn Tuesday, 21 August, 2012 at 3:57 PM, Matt Daw wrote:\n\n> Howdy. I'm curious what besides raw hardware speed determines the performance of a Seq Scan that comes entirely out of shared buffers… I ran the following on the client's server I'm profiling, which is otherwise idle:\n> \n> EXPLAIN (ANALYZE ON, BUFFERS ON) SELECT * FROM notes;\n> \n> Seq Scan on notes (cost=0.00..94004.88 rows=1926188 width=862) (actual time=0.009..1673.702 rows=1926207 loops=1)\n> Buffers: shared hit=74743\n> Total runtime: 3110.442 ms\n> (3 rows)\n> \n> \n> … and that's about 9x slower than what I get on my laptop with the same data. I ran stream-scaling on the machine and the results seem reasonable (8644.1985 MB/s with 1 core -> 25017 MB/s with 12 cores). The box is running 2.6.26.6-49 and postgresql 9.0.6.\n> \n> I'm stumped as to why it's so much slower, any ideas on what might explain it… or other benchmarks I could run to try to narrow down the cause?\n> \n> Thanks!\n> \n> Matt \n\n\n\nUgh, never mind. I ran ltrace and it's spending 99% of its time in gettimeofday.\nselect count(*) from notes; count --------- 1926207(1 row)Time: 213.950 msexplain analyze select count(*) from notes; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------ Aggregate (cost=99274.59..99274.60 rows=1 width=0) (actual time=2889.325..2889.325 rows=1 loops=1) -> Seq Scan on notes (cost=0.00..94459.07 rows=1926207 width=0) (actual time=0.005..1475.218 rows=1926207 loops=1) Total runtime: 2889.360 ms(3 rows)Time: 2889.842 ms\n\nOn Tuesday, 21 August, 2012 at 3:57 PM, Matt Daw wrote:\n\nHowdy. I'm curious what besides raw hardware speed determines the performance of a Seq Scan that comes entirely out of shared buffers… I ran the following on the client's server I'm profiling, which is otherwise idle:EXPLAIN (ANALYZE ON, BUFFERS ON) SELECT * FROM notes;Seq Scan on notes (cost=0.00..94004.88 rows=1926188 width=862) (actual time=0.009..1673.702 rows=1926207 loops=1) Buffers: shared hit=74743 Total runtime: 3110.442 ms(3 rows)… and that's about 9x slower than what I get on my laptop with the same data. I ran stream-scaling on the machine and the results seem reasonable (8644.1985 MB/s with 1 core -> 25017 MB/s with 12 cores). The box is running 2.6.26.6-49 and postgresql 9.0.6.I'm stumped as to why it's so much slower, any ideas on what might explain it… or other benchmarks I could run to try to narrow down the cause?Thanks!Matt",
"msg_date": "Tue, 21 Aug 2012 16:59:51 -0700",
"msg_from": "Matt Daw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of Seq Scan from buffer cache"
},
{
"msg_contents": "On Tue, Aug 21, 2012 at 6:59 PM, Matt Daw <[email protected]> wrote:\n> Ugh, never mind. I ran ltrace and it's spending 99% of its time in\n> gettimeofday.\n\nyeah -- this is a fairly common report. some systems (windows) have a\nless accurate but much faster gettimeofday().\n\nmerlin\n\n",
"msg_date": "Thu, 23 Aug 2012 08:32:12 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of Seq Scan from buffer cache"
}
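[Editor's sketch of the instrumentation overhead discussed above, as a psql session; the table name comes from the thread, but the TIMING option and pg_test_timing need 9.2 or later.]

    \timing on
    SELECT count(*) FROM notes;                    -- plain execution, no per-row clock reads
    EXPLAIN ANALYZE SELECT count(*) FROM notes;    -- wraps every row in gettimeofday() calls
    -- On 9.2+ the per-node timing can be disabled while keeping row counts:
    EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM notes;
    -- contrib's pg_test_timing (9.2+) reports how expensive the system clock is to read.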
] |
[
{
"msg_contents": "Hello List,\n\nI've got a system on a customers location which has a XEON E5504 @ 2.00GHz Processor (HP Proliant)\n\nIt's postgres 8.4 on a Debian Squeeze System running with 8GB of ram:\n\nThe Postgres Performance on this system measured with pgbench is very poor:\n\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 40\nnumber of transactions per client: 100\nnumber of transactions actually processed: 4000/4000\ntps = 158.283272 (including connections establishing)\ntps = 158.788545 (excluding connections establishing)\n\nThe same database on a Core i7 CPU 920 @ 2.67GHz, 8 cores with 8GB RAM same distro and Postgresql Version is much faster:\n\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 40\nnumber of transactions per client: 100\nnumber of transactions actually processed: 4000/4000\ntps = 1040.534002 (including connections establishing)\ntps = 1065.215134 (excluding connections establishing)\n\nEven optimizing the postgresql.conf values doesn't change a lot on the tps values. (less than 10%)\n\nTried Postgresql 9.1 on the Proliant:\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 40\nnumber of threads: 1\nnumber of transactions per client: 100\nnumber of transactions actually processed: 4000/4000\ntps = 53.114978 (including connections establishing)\ntps = 53.198667 (excluding connections establishing)\n\nNext was to compare the diskperformance which was much better on the XEON than on the Intel i7.\n\nAny idea where to search for the bottleneck?\n\nMit freundlichen Grüßen\n\nFelix Schubert\n\nFEScon\n... and work flows!\n\nfelix schubert\nhaspelgasse 5\n69117 heidelberg\n\nmobil: +49-151-25337718\nmail: [email protected]\nskype: fesmac\n\n\nHello List,I've got a system on a customers location which has a XEON E5504 @ 2.00GHz Processor (HP Proliant)It's postgres 8.4 on a Debian Squeeze System running with 8GB of ram:The Postgres Performance on this system measured with pgbench is very poor:transaction type: TPC-B (sort of)scaling factor: 1query mode: simplenumber of clients: 40number of transactions per client: 100number of transactions actually processed: 4000/4000tps = 158.283272 (including connections establishing)tps = 158.788545 (excluding connections establishing)The same database on a Core i7 CPU 920 @ 2.67GHz, 8 cores with 8GB RAM same distro and Postgresql Version is much faster:transaction type: TPC-B (sort of)scaling factor: 1query mode: simplenumber of clients: 40number of transactions per client: 100number of transactions actually processed: 4000/4000tps = 1040.534002 (including connections establishing)tps = 1065.215134 (excluding connections establishing)Even optimizing the postgresql.conf values doesn't change a lot on the tps values. (less than 10%)Tried Postgresql 9.1 on the Proliant:transaction type: TPC-B (sort of)scaling factor: 1query mode: simplenumber of clients: 40number of threads: 1number of transactions per client: 100number of transactions actually processed: 4000/4000tps = 53.114978 (including connections establishing)tps = 53.198667 (excluding connections establishing)Next was to compare the diskperformance which was much better on the XEON than on the Intel i7.Any idea where to search for the bottleneck?\nMit freundlichen GrüßenFelix SchubertFEScon... and work flows!felix schuberthaspelgasse 569117 heidelbergmobil: +49-151-25337718mail: [email protected]: fesmac",
"msg_date": "Fri, 24 Aug 2012 11:47:45 +0200",
"msg_from": "Felix Schubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow Performance on a XEON E5504"
},
{
"msg_contents": "On 08/24/2012 05:47 AM, Felix Schubert wrote:\n> Hello List,\n>\n> I've got a system on a customers location which has a XEON E5504 @ \n> 2.00GHz Processor (HP Proliant)\n>\n> It's postgres 8.4 on a Debian Squeeze System running with 8GB of ram:\n>\n> The Postgres Performance on this system measured with pgbench is very \n> poor:\n>\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 40\n> number of transactions per client: 100\n> number of transactions actually processed: 4000/4000\n> tps = 158.283272 (including connections establishing)\n> tps = 158.788545 (excluding connections establishing)\n>\n> The same database on a Core i7 CPU 920 @ 2.67GHz, 8 cores with 8GB RAM \n> same distro and Postgresql Version is much faster:\n>\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 40\n> number of transactions per client: 100\n> number of transactions actually processed: 4000/4000\n> tps = 1040.534002 (including connections establishing)\n> tps = 1065.215134 (excluding connections establishing)\n>\n> Even optimizing the postgresql.conf values doesn't change a lot on the \n> tps values. (less than 10%)\n>\n> Tried Postgresql 9.1 on the Proliant:\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 40\n> number of threads: 1\n> number of transactions per client: 100\n> number of transactions actually processed: 4000/4000\n> tps = 53.114978 (including connections establishing)\n> tps = 53.198667 (excluding connections establishing)\n>\n> Next was to compare the diskperformance which was much better on the \n> XEON than on the Intel i7.\n>\n> Any idea where to search for the bottleneck?\nRegards, Felix.\nThere are many question there:\n- Are you using the same disc models in both systems (Xeon and Intel i7)?\n- Which are the values for work_mem, shared_buffers, \nmaintainance_work_men, effective_io_cache, etc ?\n- Is PostgreSQL the unique service in these servers?\n\nMy first advice is that PostgreSQL 9.2 was released today, which has a \nlot of major performance improvements, so,\nyou should update your both installations to this new version to obtain \na better performance, security and stability.\n\nBest wishes\n>\n> Mit freundlichen Gr��en\n>\n> Felix Schubert\n>\n> FEScon\n> ... and work flows!\n>\n> felix schubert\n> haspelgasse 5\n> 69117 heidelberg\n>\n> mobil: +49-151-25337718\n> mail: [email protected] <mailto:[email protected]>\n> skype: fesmac\n>\n>\n>\n> <http://www.uci.cu/>\n\n\n\n10mo. 
ANIVERSARIO DE LA CREACION DE LA UNIVERSIDAD DE LAS CIENCIAS INFORMATICAS...\nCONECTADOS AL FUTURO, CONECTADOS A LA REVOLUCION\n\nhttp://www.uci.cu\nhttp://www.facebook.com/universidad.uci\nhttp://www.flickr.com/photos/universidad_uci\n\n\n\n\n\n\nOn 08/24/2012 05:47 AM, Felix Schubert\n wrote:\n\n\n\n Hello List,\n \n\nI've got a system on a customers location which has a XEON\n E5504 @ 2.00GHz Processor (HP Proliant)\n\n\nIt's postgres 8.4 on a Debian Squeeze System running with 8GB\n of ram:\n\n\nThe Postgres Performance on this system measured with pgbench\n is very poor:\n\n\n\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 40\nnumber of transactions per client: 100\nnumber of transactions actually processed: 4000/4000\ntps = 158.283272 (including connections establishing)\ntps = 158.788545 (excluding connections establishing)\n\n\n\nThe same database on a Core i7 CPU 920 @ 2.67GHz, 8 cores\n with 8GB RAM same distro and Postgresql Version is much faster:\n\n\n\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 40\nnumber of transactions per client: 100\nnumber of transactions actually processed: 4000/4000\ntps = 1040.534002 (including connections establishing)\ntps = 1065.215134 (excluding connections establishing)\n\n\n\nEven optimizing the postgresql.conf values doesn't change a\n lot on the tps values. (less than 10%)\n\n\nTried Postgresql 9.1 on the Proliant:\n\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 40\nnumber of threads: 1\nnumber of transactions per client: 100\nnumber of transactions actually processed: 4000/4000\ntps = 53.114978 (including connections establishing)\ntps = 53.198667 (excluding connections establishing)\n\n\n\nNext was to compare the diskperformance which was much better\n on the XEON than on the Intel i7.\n\n\nAny idea where to search for the bottleneck?\n\n Regards, Felix.\n There are many question there:\n - Are you using the same disc models in both systems (Xeon and Intel\n i7)?\n - Which are the values for work_mem, shared_buffers,\n maintainance_work_men, effective_io_cache, etc ?\n - Is PostgreSQL the unique service in these servers?\n\n My first advice is that PostgreSQL 9.2 was released today, which has\n a lot of major performance improvements, so, \n you should update your both installations to this new version to\n obtain a better performance, security and stability.\n\n Best wishes\n\n\n\n\n\nMit\n freundlichen Grüßen\n\n\nFelix\n Schubert\n\n\nFEScon\n... and work\n flows!\n\n\nfelix schubert\nhaspelgasse\n 5\n69117\n heidelberg\n\n\nmobil:\n +49-151-25337718\nmail: [email protected]\nskype:\n fesmac",
"msg_date": "Mon, 10 Sep 2012 16:58:57 -0400",
"msg_from": "Marcos Ortiz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Performance on a XEON E5504"
}
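[Editor's note: a quick way to read back the settings asked about above; the list of names is illustrative and uses the actual GUC spellings (maintenance_work_mem, effective_cache_size).]

    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
                    'effective_cache_size', 'fsync', 'synchronous_commit');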
] |
[
{
"msg_contents": "Maybe I should post this in Hackers instead, but I figured I'd start \nhere to avoid cluttering up that list.\n\nSo, we know we have a way of doing a loose index scan with CTEs:\n\nhttp://wiki.postgresql.org/wiki/Loose_indexscan\n\nBut that got me wondering. The planner knows from pg_stats that col1 \ncould have low cardinality. If that's the case, and a WHERE clause uses \na two column index, and col2 is specified, why can't it walk each \nindividual bucket in the two-column index, and use col2? So I forced \nsuch a beast with a CTE:\n\nWITH RECURSIVE t AS (\n SELECT min(col1) AS col1\n FROM tablename\n UNION ALL\n SELECT (SELECT min(col1)\n FROM tablename\n WHERE col1 > t.col1)\n FROM t\n WHERE t.col1 IS NOT NULL\n)\nSELECT p.*\n FROM t\n JOIN tablename p USING (col1)\n where p.col2 = 12345\n\nI ask, because while the long-term fix would be to re-order the index to \n(col2, col1), this seems like a situation the planner could easily \ndetect and compensate for. In our particular example, execution time \nwent from 160ms to 2ms with the CTE rewrite. This is a contrived \nexample, but it seems like loose index scans would be useful in other \nways. Heck, this:\n\nSELECT DISTINCT col1\n FROM tablename;\n\nHas terrible performance because it always seems to revert to a sequence \nscan, but it's something people do *all the time*. I can't reasonably \nexpect all of my devs to switch to that admittedly gross CTE to get a \nfaster effect, so I'm just thinking out loud.\n\nUntil PG puts in something to fix this, I plan on writing a stored \nprocedure that writes a dynamic CTE and returns a corresponding result \nset. It's not ideal, but it would solve our particular itch. Really, \nthis should be possible with any indexed column, so I might abstract it.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Fri, 24 Aug 2012 11:20:21 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Loose Index Scans by Planner?"
},
{
"msg_contents": "Shaun Thomas <[email protected]> wrote:\n \n> So, we know we have a way of doing a loose index scan with CTEs:\n> \n> http://wiki.postgresql.org/wiki/Loose_indexscan\n \nI tried this on a table in production with 23 million rows for a\ncolumn with 45 distinct values which is the high-order column of a\nfour-column index. This ran in 445 ms first time and 2 ms on the\nsecond and subsequent tries. The equivalent SELECT DISTINCT ran in\n30 seconds first time, and got down to 11.5 seconds after a few\nruns. So roughly two orders of magnitude faster with a cold cache\nand three orders of magnitude faster with a warm cache.\n \nThat sure would be a nice optimization to have in the planner.\n \n> But that got me wondering. The planner knows from pg_stats that\n> col1 could have low cardinality. If that's the case, and a WHERE\n> clause uses a two column index, and col2 is specified, why can't\n> it walk each individual bucket in the two-column index, and use\n> col2? So I forced such a beast with a CTE:\n> \n> WITH RECURSIVE t AS (\n> SELECT min(col1) AS col1\n> FROM tablename\n> UNION ALL\n> SELECT (SELECT min(col1)\n> FROM tablename\n> WHERE col1 > t.col1)\n> FROM t\n> WHERE t.col1 IS NOT NULL\n> )\n> SELECT p.*\n> FROM t\n> JOIN tablename p USING (col1)\n> where p.col2 = 12345\n> \n> I ask, because while the long-term fix would be to re-order the\n> index to (col2, col1), this seems like a situation the planner\n> could easily detect and compensate for. In our particular example,\n> execution time went from 160ms to 2ms with the CTE rewrite.\n \nWell, that'd be the icing on the cake. I'd be overjoyed to get the\ncake. :-)\n \n-Kevin\n\n",
"msg_date": "Fri, 24 Aug 2012 14:40:57 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Loose Index Scans by Planner?"
},
{
"msg_contents": "On 08/24/2012 02:40 PM, Kevin Grittner wrote:\n\n> Well, that'd be the icing on the cake. I'd be overjoyed to get the\n> cake. :-)\n\nYes indeed. The \"cake\" would fix the DISTINCT case, which I see way more \noften in the wild than my index column-skip.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Fri, 24 Aug 2012 16:22:12 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Loose Index Scans by Planner?"
},
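[Editor's sketch of the wiki's recursive-CTE workaround for the SELECT DISTINCT case mentioned above; 'tablename' and 'col1' are placeholders, and it only pays off when col1 has few distinct values and a btree index.]

    WITH RECURSIVE d AS (
        (SELECT col1 FROM tablename ORDER BY col1 LIMIT 1)
      UNION ALL
        SELECT (SELECT col1 FROM tablename
                 WHERE col1 > d.col1
                 ORDER BY col1 LIMIT 1)
          FROM d
         WHERE d.col1 IS NOT NULL
    )
    SELECT col1 FROM d WHERE col1 IS NOT NULL;  -- NULLs in col1, if any, are not returned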
{
"msg_contents": "On Fri, Aug 24, 2012 at 9:20 AM, Shaun Thomas <[email protected]> wrote:\n> Maybe I should post this in Hackers instead, but I figured I'd start here to\n> avoid cluttering up that list.\n>\n> So, we know we have a way of doing a loose index scan with CTEs:\n>\n> http://wiki.postgresql.org/wiki/Loose_indexscan\n>\n> But that got me wondering. The planner knows from pg_stats that col1 could\n> have low cardinality. If that's the case, and a WHERE clause uses a two\n> column index, and col2 is specified, why can't it walk each individual\n> bucket in the two-column index, and use col2? So I forced such a beast with\n> a CTE:\n>\n> WITH RECURSIVE t AS (\n> SELECT min(col1) AS col1\n> FROM tablename\n> UNION ALL\n> SELECT (SELECT min(col1)\n> FROM tablename\n> WHERE col1 > t.col1)\n> FROM t\n> WHERE t.col1 IS NOT NULL\n> )\n> SELECT p.*\n> FROM t\n> JOIN tablename p USING (col1)\n> where p.col2 = 12345\n\nThat is awesome. I had never though of trying to do it that way.\n\n> I ask, because while the long-term fix would be to re-order the index to\n> (col2, col1),\n\nNot always. The case for having (col1,col2) might be very compelling.\n And having to maintain both orderings when just maintaining one would\nbe \"good enough\" would kind of suck. Having the planner do the best\nit can given the index it has is a good thing.\n\nI would also note that having this feature (called \"skip scan\" in some\nother products) would mimic what happens when you need to do a query\nspecifying col2 but not col1 on a table family which is list\npartitioned on col1. Getting some of the benefits of partitioning\nwithout having to actually do the partitioning would be a good thing.\n\n> this seems like a situation the planner could easily detect\n> and compensate for.\n\nYes, it is just a Small Matter Of Programming :)\n\nAnd one I've wanted for a while.\n\nIf only someone else would offer to do it for me....\n\nCheers,\n\nJeff\n\n",
"msg_date": "Fri, 24 Aug 2012 20:42:40 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Loose Index Scans by Planner?"
},
{
"msg_contents": "> Not always. The case for having (col1,col2) might be very\n> compelling.\n\nTrue. But in our case, the table has like 8M rows, so and col1 is some kind of job identifier, so it's evenly distributed. Col2 on the other hand is a customer id, so it has much higher cardinality. Previous DBA missed it during creation, and it was never loud enough in the logs for me to notice it until recently. Looks like I need to do a column-ordering audit. :)\n\n> If only someone else would offer to do it for me....\n\nDon't look at me. My C is rustier than a 50-year-old bike chain. :)\n\n\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Sat, 25 Aug 2012 14:39:11 +0000",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Loose Index Scans by Planner?"
},
{
"msg_contents": "I'd like to create a ToDo item for \"loose index scans\" or \"skip\nscans\", when the lead column has low cardinality and is not used in\nthe where clause. This case can potentially be optimized by using the\nindex as if it were a collection of N \"partitioned\" indexes, where N\nis the cardinality of the lead column. Any objections?\n\nI don't really have a detailed plan on how to do it. I expect the\nplanner part would be harder than the execution part.\n\nSee \"[PERFORM] Loose Index Scans by Planner\" thread.\n\nThanks,\n\nJeff\n\n",
"msg_date": "Sun, 2 Sep 2012 14:53:45 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Fwd: [PERFORM] Loose Index Scans by Planner?"
}
] |
[
{
"msg_contents": "Hello,\n\n\nI would like to create some application using triggers and LISTEN/NOTIFY\nframework. I've tested it, and I noticed that performance of NOTIFY\nsignifically decreases with increasing number of distinct NOTIFIES in\ntransaction. \nI found that function AsyncExistsPendingNotify is responsibe for it. I think\nthat complexivity of searching duplicates there is O(N^2). Would be possible\nto improve performance of it? Maybe by using list for elements precedence\nand binary search tree for searching duplicates - with complexivity of\nO(Nlog2(N)).\n\nI'v tested with 50000 of NOTICES. Updating table with 20000 NOTICES when\nsearching is not performed took 1,5 second. With searching it took 28\nseconds.\n\n-------------------------------------------\nArtur Zajac\n\n\n\n\n",
"msg_date": "Fri, 24 Aug 2012 20:46:42 +0200",
"msg_from": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "NOTIFY performance"
},
{
"msg_contents": "On Fri, Aug 24, 2012 at 1:46 PM, Artur Zając <[email protected]> wrote:\n> Hello,\n>\n>\n> I would like to create some application using triggers and LISTEN/NOTIFY\n> framework. I've tested it, and I noticed that performance of NOTIFY\n> significally decreases with increasing number of distinct NOTIFIES in\n> transaction.\n> I found that function AsyncExistsPendingNotify is responsibe for it. I think\n> that complexivity of searching duplicates there is O(N^2). Would be possible\n> to improve performance of it? Maybe by using list for elements precedence\n> and binary search tree for searching duplicates - with complexivity of\n> O(Nlog2(N)).\n>\n> I'v tested with 50000 of NOTICES. Updating table with 20000 NOTICES when\n> searching is not performed took 1,5 second. With searching it took 28\n> seconds.\n\nI've confirmed the n^2 behavior on 9.2:\npostgres=# select pg_notify(v::text, null) from generate_series(1,10000) v;\nTime: 281.000 ms\npostgres=# select pg_notify(v::text, null) from generate_series(1,50000) v;\nTime: 7148.000 ms\n\n...but i'm curious if you're going about things the right\nway...typically I'd imagine you'd write out actionable items to a\ntable and issue a much broader NOTIFY which taps listeners on the\ntable to search the action queue. Could you describe your problem in\na little more detail?\n\nmerlin\n\n",
"msg_date": "Fri, 24 Aug 2012 14:12:05 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOTIFY performance"
},
{
"msg_contents": ">> I would like to create some application using triggers and \n>> LISTEN/NOTIFY framework. I've tested it, and I noticed that \n>> performance of NOTIFY significally decreases with increasing number of \n>> distinct NOTIFIES in transaction.\n>> I found that function AsyncExistsPendingNotify is responsibe for it. I \n>> think that complexivity of searching duplicates there is O(N^2). Would \n>> be possible to improve performance of it? Maybe by using list for \n>> elements precedence and binary search tree for searching duplicates - \n>> with complexivity of O(Nlog2(N)).\n>>\n>> I'v tested with 50000 of NOTICES. Updating table with 20000 NOTICES \n>> when searching is not performed took 1,5 second. With searching it \n>> took 28 seconds.\n>\n>I've confirmed the n^2 behavior on 9.2:\n>postgres=# select pg_notify(v::text, null) from generate_series(1,10000) v;\n>Time: 281.000 ms\n>postgres=# select pg_notify(v::text, null) from generate_series(1,50000) v;\n>Time: 7148.000 ms\n>\n>...but i'm curious if you're going about things the right way...typically I'd imagine you'd write out actionable items to a table and issue a much broader NOTIFY which taps listeners on the table to search the action queue. Could you describe your problem in >a little more detail?\n\nWhen there was only NOTIFY option with simple channel name there was no need to send so many messages - creating 50000 channels would be really stupid. NOTIFY to channel might only mean that there is sth new in table or sth similar. But with payload option it would be possible to make simple system for notify other database clients (or self notify - when changes are made by triggers) that some single record has changed and it should be invalidated in client cache. I would made (and I already made) that system (similar to streaming replication :) but more more simple), but unfortunately even not big update on table would kill my system with complexivity O(N^2). In general , I know that this system would be not efficient, but for my application it would simply solve my many problems.\n\n-------------------------------------------\nArtur Zajac\n\n\n\n\n\n\n",
"msg_date": "Fri, 24 Aug 2012 23:11:08 +0200",
"msg_from": "=?utf-8?Q?Artur_Zaj=C4=85c?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOTIFY performance"
},
{
"msg_contents": "On Fri, Aug 24, 2012 at 4:11 PM, Artur Zając <[email protected]> wrote:\n>>> I would like to create some application using triggers and\n>>> LISTEN/NOTIFY framework. I've tested it, and I noticed that\n>>> performance of NOTIFY significally decreases with increasing number of\n>>> distinct NOTIFIES in transaction.\n>>> I found that function AsyncExistsPendingNotify is responsibe for it. I\n>>> think that complexivity of searching duplicates there is O(N^2). Would\n>>> be possible to improve performance of it? Maybe by using list for\n>>> elements precedence and binary search tree for searching duplicates -\n>>> with complexivity of O(Nlog2(N)).\n>>>\n>>> I'v tested with 50000 of NOTICES. Updating table with 20000 NOTICES\n>>> when searching is not performed took 1,5 second. With searching it\n>>> took 28 seconds.\n>>\n>>I've confirmed the n^2 behavior on 9.2:\n>>postgres=# select pg_notify(v::text, null) from generate_series(1,10000) v;\n>>Time: 281.000 ms\n>>postgres=# select pg_notify(v::text, null) from generate_series(1,50000) v;\n>>Time: 7148.000 ms\n>>\n>>...but i'm curious if you're going about things the right way...typically I'd imagine you'd write out actionable items to a table and issue a much broader NOTIFY which taps listeners on the table to search the action queue. Could you describe your problem in >a little more detail?\n>\n> When there was only NOTIFY option with simple channel name there was no need to send so many messages - creating 50000 channels would be really stupid. NOTIFY to channel might only mean that there is sth new in table or sth similar. But with payload option it would be possible to make simple system for notify other database clients (or self notify - when changes are made by triggers) that some single record has changed and it should be invalidated in client cache. I would made (and I already made) that system (similar to streaming replication :) but more more simple), but unfortunately even not big update on table would kill my system with complexivity O(N^2). In general , I know that this system would be not efficient, but for my application it would simply solve my many problems.\n\nYeah -- my take is that you're pushing too much information through\nthe notify. If I was in your shoes, I'd be notifying the client to\ncome and check and invalidation queue which would be updated through\nsome sort of trigger. The payload options is great in that it can\nsave you a round trip in some latency sensitive cases but it's not a\nreplacement for a proper queue.\n\nmerlin\n\n",
"msg_date": "Tue, 28 Aug 2012 10:11:09 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOTIFY performance"
},
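[Editor's sketch of the "queue table plus one broad NOTIFY" pattern suggested above; every name here (table, trigger, the nid column) is hypothetical, not the poster's schema.]

    CREATE TABLE cache_invalidation_queue (
        id     bigserial   PRIMARY KEY,
        tbl    text        NOT NULL,
        row_id bigint      NOT NULL,
        queued timestamptz NOT NULL DEFAULT now()
    );

    CREATE OR REPLACE FUNCTION queue_invalidation() RETURNS trigger AS $$
    BEGIN
        INSERT INTO cache_invalidation_queue (tbl, row_id)
             VALUES (TG_TABLE_NAME, NEW.nid);
        NOTIFY cache_invalidation;  -- payload-less duplicates collapse, so this stays cheap
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER my_table_invalidate
        AFTER INSERT OR UPDATE ON my_table
        FOR EACH ROW EXECUTE PROCEDURE queue_invalidation();

    -- clients run: LISTEN cache_invalidation;
    -- and on wakeup drain the queue: DELETE FROM cache_invalidation_queue RETURNING tbl, row_id;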
{
"msg_contents": "On Fri, Aug 24, 2012 at 11:46 AM, Artur Zając <[email protected]> wrote:\n> Hello,\n>\n>\n> I would like to create some application using triggers and LISTEN/NOTIFY\n> framework. I've tested it, and I noticed that performance of NOTIFY\n> significally decreases with increasing number of distinct NOTIFIES in\n> transaction.\n> I found that function AsyncExistsPendingNotify is responsibe for it. I think\n> that complexivity of searching duplicates there is O(N^2). Would be possible\n> to improve performance of it? Maybe by using list for elements precedence\n> and binary search tree for searching duplicates - with complexivity of\n> O(Nlog2(N)).\n\nI wonder if should be trying to drop duplicates at all. I think that\ndoing that made a lot more sense before payloads existed.\n\nThe docs said that the system \"can\" drop duplicates, so making it no\nlonger do so would be backwards compatible.\n\nMaybe drop duplicates where the payload was the empty string, but keep\nthem otherwise?\n\nCheers,\n\nJeff\n\n",
"msg_date": "Fri, 31 Aug 2012 12:54:26 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOTIFY performance"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> I wonder if should be trying to drop duplicates at all. I think that\n> doing that made a lot more sense before payloads existed.\n\nPerhaps, but we have a lot of history to be backwards-compatible with.\n\n> The docs said that the system \"can\" drop duplicates, so making it no\n> longer do so would be backwards compatible.\n\nMaybe compatible from a language-lawyerly point of view, but the\nperformance characteristics would be hugely different - and since this\ncomplaint is entirely about performance, I don't think it's fair to\nignore that. We'd be screwing people who've depended on the historical\nbehavior to accommodate people who expect something that never worked\nwell before to start working well.\n\nThe case that I'm specifically worried about is rules and triggers that\nissue NOTIFY without worrying about generating lots of duplicates when\nmany rows are updated in one command.\n\n> Maybe drop duplicates where the payload was the empty string, but keep\n> them otherwise?\n\nMaybe, but that seems pretty weird/unpredictable. (In particular, if\nyou have a mixed workload with some of both types of notify, you lose\ntwice: some of the inserts will need to scan the list, so that cost\nis still quadratic, but you still have a huge event list to dump into\nthe queue when the time comes.)\n\nI seem to recall that we discussed the idea of checking only the last N\nnotifies for duplicates, for some reasonably small N (somewhere between\n10 and 100 perhaps). That would prevent the quadratic behavior and yet\nalso eliminate dups in most of the situations where it would matter.\nAny N>1 would require a more complicated data structure than is there\nnow, but it doesn't seem that hard.\n\nThe other thing we'd need to find out is whether that's the only problem\nfor generating bazillions of notify events per transaction. It won't\nhelp to hack AsyncExistsPendingNotify if dropping the events into the\nqueue is still too expensive. I am worried about the overall processing\ncost here, consumers and producers both.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 31 Aug 2012 16:22:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOTIFY performance"
},
{
"msg_contents": "On Fri, Aug 31, 2012 at 1:22 PM, Tom Lane <[email protected]> wrote:\n> Jeff Janes <[email protected]> writes:\n>> I wonder if should be trying to drop duplicates at all. I think that\n>> doing that made a lot more sense before payloads existed.\n>\n> Perhaps, but we have a lot of history to be backwards-compatible with.\n>\n>> The docs said that the system \"can\" drop duplicates, so making it no\n>> longer do so would be backwards compatible.\n>\n> Maybe compatible from a language-lawyerly point of view, but the\n> performance characteristics would be hugely different - and since this\n> complaint is entirely about performance, I don't think it's fair to\n> ignore that. We'd be screwing people who've depended on the historical\n> behavior to accommodate people who expect something that never worked\n> well before to start working well.\n>\n> The case that I'm specifically worried about is rules and triggers that\n> issue NOTIFY without worrying about generating lots of duplicates when\n> many rows are updated in one command.\n\nWould those ones generally have an empty payload? I would think they\ndo, but I have not yet used NOTIFY in anger.\n\n>\n>> Maybe drop duplicates where the payload was the empty string, but keep\n>> them otherwise?\n>\n> Maybe, but that seems pretty weird/unpredictable. (In particular, if\n> you have a mixed workload with some of both types of notify, you lose\n> twice: some of the inserts will need to scan the list, so that cost\n> is still quadratic, but you still have a huge event list to dump into\n> the queue when the time comes.)\n\nBut only empties would need to do searches, and you would only need to\nsearch a list of channels that have already seen an empty in that same\nsub-transaction (with an auxiliary data structure), so it would only\nbe quadratic if you used a huge number of channels.\n\n\nPerhaps an adaptive heuristic could be used, where if you sent 1000\nnotices in this subtransaction and the resulting queue is more than,\nsay, 900, we stop looking for any more duplicates because there don't\nseem to be many. And even then, still check the most recent one in\nthe queue, just not the whole list.\n\nI think it would virtually take an act of malice to start out sending\nall distinct messages, then to switch to sending mostly replicates,\nbut with the replicates interleaved.\n\n...\n\n>\n> The other thing we'd need to find out is whether that's the only problem\n> for generating bazillions of notify events per transaction. It won't\n> help to hack AsyncExistsPendingNotify if dropping the events into the\n> queue is still too expensive. I am worried about the overall processing\n> cost here, consumers and producers both.\n\nAsyncExistsPendingNotify is head and shoulders above anything else, as\nlong as we can assume someone can do something meaningful with the\nmessages in constant and reasonable time.\n\nThe time in milliseconds to send x notifies in one subtrans is about\n(valid only for large x):\n\nt = 1.9e-05 * x^2\n\nSo to send 10,000,000 would take about 22 days just on the sending side.\n\nIf I knock out the AsyncExistsPendingNotify with the attached patch, I\ncan send 10,000,000 notifies in 50.1 seconds, about 10 seconds on the\nnotifier side pre-commit and the rest post-commit for the notifier and\non the listen side. (According to \"top\", most of the time seems to\nbe CPU time of psql serving as the listener. 
According to opreport,\nit is not.)\n\nI used this as the listener:\n\nperl -le 'print \"listen foo;\"; print \"select pg_sleep(1), now();\"\nforeach 1..10000000'| psql > foo\n\nAnd this as the notifier:\n\nselect now(); select count(pg_notify('foo', gen::text)) from\ngenerate_series(1,10000000) as gen;\n\nTime was measured from when the sender started sending to when the\nlistener finished writing out async messages and hit the next now().\n\n\nCheers,\n\nJeff",
"msg_date": "Fri, 31 Aug 2012 19:12:28 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOTIFY performance"
}
] |
[
{
"msg_contents": "Hello List,\n\nI've got a system on a customers location which has a XEON E5504 @ 2.00GHz Processor (HP Proliant)\n\nIt's postgres 8.4 on a Debian Squeeze System running with 8GB of ram:\n\nThe Postgres Performance on this system measured with pgbench is very poor:\n\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 40\nnumber of transactions per client: 100\nnumber of transactions actually processed: 4000/4000\ntps = 158.283272 (including connections establishing)\ntps = 158.788545 (excluding connections establishing)\n\nThe same database on a Core i7 CPU 920 @ 2.67GHz, 8 cores with 8GB RAM same distro and Postgresql Version is much faster:\n\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 40\nnumber of transactions per client: 100\nnumber of transactions actually processed: 4000/4000\ntps = 1040.534002 (including connections establishing)\ntps = 1065.215134 (excluding connections establishing)\n\nEven optimizing the postgresql.conf values doesn't change a lot on the tps values. (less than 10%)\n\nTried Postgresql 9.1 on the Proliant:\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 40\nnumber of threads: 1\nnumber of transactions per client: 100\nnumber of transactions actually processed: 4000/4000\ntps = 53.114978 (including connections establishing)\ntps = 53.198667 (excluding connections establishing)\n\nNext was to compare the diskperformance which was much better on the XEON than on the Intel i7.\n\nAny idea where to search for the bottleneck?\n\nbest regards,\n\nFelix Schubert\n",
"msg_date": "Sat, 25 Aug 2012 14:07:34 +0200",
"msg_from": "Felix Schubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow Performance on a XEON E5504"
},
{
"msg_contents": "On Sat, Aug 25, 2012 at 6:07 AM, Felix Schubert <[email protected]> wrote:\n> Hello List,\n>\n> I've got a system on a customers location which has a XEON E5504 @ 2.00GHz Processor (HP Proliant)\n>\n> It's postgres 8.4 on a Debian Squeeze System running with 8GB of ram:\n>\n> The Postgres Performance on this system measured with pgbench is very poor:\n>\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 40\n> number of transactions per client: 100\n> number of transactions actually processed: 4000/4000\n> tps = 158.283272 (including connections establishing)\n> tps = 158.788545 (excluding connections establishing)\n\nFor a single thread on a 10k RPM drive the maximum number of times per\nsecond you can write and get a proper fsync back is 166. This is\nquite close to that theoretical max.\n\n> The same database on a Core i7 CPU 920 @ 2.67GHz, 8 cores with 8GB RAM same distro and Postgresql Version is much faster:\n>\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 40\n> number of transactions per client: 100\n> number of transactions actually processed: 4000/4000\n> tps = 1040.534002 (including connections establishing)\n> tps = 1065.215134 (excluding connections establishing)\n\nThis is much faster than the theoretical limit of a single 10k RPM\ndrive obeying fsync.\n\nI'll ignore the rest of your post where you get 53 tps after\noptimization. The important thing you forgot to mention was your\ndrive subsystem here. I'm gonna take a wild guess that they are both\non a single drive and that the older machine is using an older SATA or\nPATA interface HD that is lying about fsync, and the new machine is\nusing a 10k RPM drive that is not lying about fsync and you are\ngetting a proper ~150 tps from it.\n\nSo, what kind of IO subsystems you got in those things?\n\n",
"msg_date": "Sat, 25 Aug 2012 06:42:21 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Performance on a XEON E5504"
},
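[Editor's note: one hedged way to confirm the "commits are waiting on fsync" theory before blaming the controller; the expected TPS jump is illustrative.]

    SHOW fsync;                 -- should stay on; turning it off risks corruption
    SHOW synchronous_commit;
    -- For a throwaway test only: set synchronous_commit = off in postgresql.conf
    -- (or ALTER DATABASE ... SET synchronous_commit = off), reload, and re-run pgbench.
    -- If TPS jumps from ~160 into the thousands, each commit was waiting on the platters,
    -- which points at a missing or disabled battery-backed write cache rather than the CPU.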
{
"msg_contents": "Hi Scott,\n\nthe controller is a HP i410 running 3x300GB SAS 15K / Raid 5 \n\nMit freundlichen Grüßen\n\nFelix Schubert\n\nVon meinem iPhone gesendet :-)\n\nAm 25.08.2012 um 14:42 schrieb Scott Marlowe <[email protected]>:\n\n> On Sat, Aug 25, 2012 at 6:07 AM, Felix Schubert <[email protected]> wrote:\n>> Hello List,\n>> \n>> I've got a system on a customers location which has a XEON E5504 @ 2.00GHz Processor (HP Proliant)\n>> \n>> It's postgres 8.4 on a Debian Squeeze System running with 8GB of ram:\n>> \n>> The Postgres Performance on this system measured with pgbench is very poor:\n>> \n>> transaction type: TPC-B (sort of)\n>> scaling factor: 1\n>> query mode: simple\n>> number of clients: 40\n>> number of transactions per client: 100\n>> number of transactions actually processed: 4000/4000\n>> tps = 158.283272 (including connections establishing)\n>> tps = 158.788545 (excluding connections establishing)\n> \n> For a single thread on a 10k RPM drive the maximum number of times per\n> second you can write and get a proper fsync back is 166. This is\n> quite close to that theoretical max.\n> \n>> The same database on a Core i7 CPU 920 @ 2.67GHz, 8 cores with 8GB RAM same distro and Postgresql Version is much faster:\n>> \n>> transaction type: TPC-B (sort of)\n>> scaling factor: 1\n>> query mode: simple\n>> number of clients: 40\n>> number of transactions per client: 100\n>> number of transactions actually processed: 4000/4000\n>> tps = 1040.534002 (including connections establishing)\n>> tps = 1065.215134 (excluding connections establishing)\n> \n> This is much faster than the theoretical limit of a single 10k RPM\n> drive obeying fsync.\n> \n> I'll ignore the rest of your post where you get 53 tps after\n> optimization. The important thing you forgot to mention was your\n> drive subsystem here. I'm gonna take a wild guess that they are both\n> on a single drive and that the older machine is using an older SATA or\n> PATA interface HD that is lying about fsync, and the new machine is\n> using a 10k RPM drive that is not lying about fsync and you are\n> getting a proper ~150 tps from it.\n> \n> So, what kind of IO subsystems you got in those things?\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n",
"msg_date": "Sat, 25 Aug 2012 14:53:28 +0200",
"msg_from": "Felix Schubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Performance on a XEON E5504"
},
{
"msg_contents": "On Sat, Aug 25, 2012 at 6:53 AM, Felix Schubert <[email protected]> wrote:\n> Hi Scott,\n>\n> the controller is a HP i410 running 3x300GB SAS 15K / Raid 5\n\nWell it sounds like it does NOT have a battery back caching module on\nit, am I right?\n\n",
"msg_date": "Sat, 25 Aug 2012 06:59:45 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Performance on a XEON E5504"
},
{
"msg_contents": "On Sat, Aug 25, 2012 at 6:59 AM, Scott Marlowe <[email protected]> wrote:\n> On Sat, Aug 25, 2012 at 6:53 AM, Felix Schubert <[email protected]> wrote:\n>> Hi Scott,\n>>\n>> the controller is a HP i410 running 3x300GB SAS 15K / Raid 5\n>\n> Well it sounds like it does NOT have a battery back caching module on\n> it, am I right?\n\nAlso what software did you use to benchmark your drive subsystem?\nBonnie++ is a good place to start. There are better suites out there\nbut it's been a while for me since I've used them.\n\nAlso note the HP i410 is not the fastest RAID controller ever, but it\nshould be faster than this if it has a battery backed cache on it\nwhich will allow write-back operation. Without it the controller will\ndefault to write-through, which is much slower.\n\n",
"msg_date": "Sat, 25 Aug 2012 07:04:51 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Performance on a XEON E5504"
},
{
"msg_contents": "Don't know but I forwarded the question to the System Administrator. \n\nAnyhow thanks for the information up to now!\n\nbest regards,\n\nFelix \n\nAm 25.08.2012 um 14:59 schrieb Scott Marlowe <[email protected]>:\n\n> Well it sounds like it does NOT have a battery back caching module on\n> it, am I right?\n\n\nDon't know but I forwarded the question to the System Administrator. Anyhow thanks for the information up to now!\nbest regards,Felix \n\nAm 25.08.2012 um 14:59 schrieb Scott Marlowe <[email protected]>:Well it sounds like it does NOT have a battery back caching module onit, am I right?",
"msg_date": "Sat, 25 Aug 2012 23:26:11 +0200",
"msg_from": "Felix Schubert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Performance on a XEON E5504"
},
{
"msg_contents": "No problem, hope it helps. The single most important part of any\nfast, transactional server is the RAID controller and its cache.\n\nOn Sat, Aug 25, 2012 at 3:26 PM, Felix Schubert <[email protected]> wrote:\n> Don't know but I forwarded the question to the System Administrator.\n>\n> Anyhow thanks for the information up to now!\n>\n> best regards,\n>\n> Felix\n>\n> Am 25.08.2012 um 14:59 schrieb Scott Marlowe <[email protected]>:\n>\n> Well it sounds like it does NOT have a battery back caching module on\n> it, am I right?\n>\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n\n",
"msg_date": "Sat, 25 Aug 2012 15:47:04 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Performance on a XEON E5504"
}
] |
[
{
"msg_contents": "Hi,\n\nWe have a table which its TOAST table size is 66 GB, and we believe should be smaller.\nThe table size is 472 kb. And the table has 4 columns that only one of them should be toasted.\n\nThe table has only 8 dead tuples, so apparently this is not the problem.\n\nThis table contains a column with bytea type data (kept as TOAST). We tried to check what is the size of the toasted data in each row by using the following query (the data_blob is the bytea column):\n\nSELECT nid, octet_length(data_blob) FROM my_table ORDER BY octet_length(data_blob) DESC;\n\nThis result contain 1782 rows. The sizes I get from each row are between 35428 to 42084.\n\n1782 * 38000 = 67716000 byte = 64.579 MB .\n\nWhat can be the reason for a table size of 66 GB? What else should I check?\n\nThanks in advance,\nLiron\n\nHi, We have a table which its TOAST table size is 66 GB, and we believe should be smaller.The table size is 472 kb. And the table has 4 columns that only one of them should be toasted. The table has only 8 dead tuples, so apparently this is not the problem. This table contains a column with bytea type data (kept as TOAST). We tried to check what is the size of the toasted data in each row by using the following query (the data_blob is the bytea column): SELECT nid, octet_length(data_blob) FROM my_table ORDER BY octet_length(data_blob) DESC; This result contain 1782 rows. The sizes I get from each row are between 35428 to 42084. 1782 * 38000 = 67716000 byte = 64.579 MB . What can be the reason for a table size of 66 GB? What else should I check? Thanks in advance,Liron",
"msg_date": "Sun, 26 Aug 2012 15:46:31 +0300",
"msg_from": "Liron Shiri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Investigating the reason for a very big TOAST table size"
},
{
"msg_contents": "On Sun, Aug 26, 2012 at 5:46 AM, Liron Shiri <[email protected]> wrote:\n> Hi,\n>\n>\n>\n> We have a table which its TOAST table size is 66 GB, and we believe should\n> be smaller.\n>\n> The table size is 472 kb. And the table has 4 columns that only one of them\n> should be toasted.\n>\n>\n>\n> The table has only 8 dead tuples, so apparently this is not the problem.\n>\n>\n>\n> This table contains a column with bytea type data (kept as TOAST). We tried\n> to check what is the size of the toasted data in each row by using the\n> following query (the data_blob is the bytea column):\n>\n>\n>\n> SELECT nid, octet_length(data_blob) FROM my_table ORDER BY\n> octet_length(data_blob) DESC;\n>\n>\n>\n> This result contain 1782 rows. The sizes I get from each row are between\n> 35428 to 42084.\n>\n>\n>\n> 1782 * 38000 = 67716000 byte = 64.579 MB .\n>\n>\n>\n> What can be the reason for a table size of 66 GB? What else should I check?\n\nIs the size of the database continuing to grow over time, or is it stable?\n\nHave you done a hot-standby promotion on this database, perchance? I\nhave an open bug report on an unusual situation that began after that:\nhttp://archives.postgresql.org/pgsql-bugs/2012-08/msg00108.php\n\n\n-- \nfdr\n\n",
"msg_date": "Mon, 27 Aug 2012 09:42:19 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Investigating the reason for a very big TOAST table size"
},
{
"msg_contents": "There were no \"hot standby\" configuration, but the DB has start grow fast after restoring from a base backup as described in http://www.postgresql.org/docs/8.3/static/continuous-archiving.html#BACKUP-BASE-BACKUP \n\nThe DB has been growing for a while, and now it seems to become stable after adjusting the autovacuum cost parameters to be more aggressive.\n\nThe DB version is 8.3.7.\n\nDo you think it might be the same issue?\nWhat can we do in order to decrease DB size?\n\n-----Original Message-----\nFrom: Daniel Farina [mailto:[email protected]] \nSent: Monday, August 27, 2012 7:42 PM\nTo: Liron Shiri\nCc: [email protected]\nSubject: Re: [PERFORM] Investigating the reason for a very big TOAST table size\n\nOn Sun, Aug 26, 2012 at 5:46 AM, Liron Shiri <[email protected]> wrote:\n> Hi,\n>\n>\n>\n> We have a table which its TOAST table size is 66 GB, and we believe \n> should be smaller.\n>\n> The table size is 472 kb. And the table has 4 columns that only one of \n> them should be toasted.\n>\n>\n>\n> The table has only 8 dead tuples, so apparently this is not the problem.\n>\n>\n>\n> This table contains a column with bytea type data (kept as TOAST). We \n> tried to check what is the size of the toasted data in each row by \n> using the following query (the data_blob is the bytea column):\n>\n>\n>\n> SELECT nid, octet_length(data_blob) FROM my_table ORDER BY\n> octet_length(data_blob) DESC;\n>\n>\n>\n> This result contain 1782 rows. The sizes I get from each row are \n> between\n> 35428 to 42084.\n>\n>\n>\n> 1782 * 38000 = 67716000 byte = 64.579 MB .\n>\n>\n>\n> What can be the reason for a table size of 66 GB? What else should I check?\n\nIs the size of the database continuing to grow over time, or is it stable?\n\nHave you done a hot-standby promotion on this database, perchance? I have an open bug report on an unusual situation that began after that:\nhttp://archives.postgresql.org/pgsql-bugs/2012-08/msg00108.php\n\n\n--\nfdr\n\nScanned by Check Point Total Security Gateway.\n\n",
"msg_date": "Tue, 28 Aug 2012 09:24:15 +0300",
"msg_from": "Liron Shiri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Investigating the reason for a very big TOAST table\n size"
},
{
"msg_contents": "On Mon, Aug 27, 2012 at 11:24 PM, Liron Shiri <[email protected]> wrote:\n> There were no \"hot standby\" configuration, but the DB has start grow fast after restoring from a base backup as described in http://www.postgresql.org/docs/8.3/static/continuous-archiving.html#BACKUP-BASE-BACKUP\n\nVery interesting. That is more or less the same concept, but it might\neliminate some variables. What's your workload on the bloaty toast\ntable? Mine is per the bug report, which is repeated concatenation of\nstrings.\n\n> The DB has been growing for a while, and now it seems to become stable after adjusting the autovacuum cost parameters to be more aggressive.\n\nMy database has taken many days (over a week) to stabilize. I was\nabout to write that it never stops growing (we'd eventually have to\nVACUUM FULL or do a column rotation), but that is not true. This\ngraph is a bit spotty for unrelated reasons, but here's something like\nwhat I'm seeing:\n\nhttp://i.imgur.com/tbj1n.png\n\nThe standby promotion sticks out quite a bit. I wonder if the\noriginal huge size is not the result of a huge delete (which I\nsurmised) but rather another standby promotion. We tend to do that a\nlot here.\n\n> The DB version is 8.3.7.\n>\n> Do you think it might be the same issue?\n> What can we do in order to decrease DB size?\n\nOne weakness of Postgres is can't really debloat online or\nincrementally yet, but luckily your table is quite small: you can use\n\"CLUSTER\" to lock and re-write the table, which will then be small.\nDo not use VACUUM FULL on this old release, but for future reference,\nVACUUM FULL has been made more like CLUSTER in newer releases anyway,\nand one can use that in the future. Both of these do table rewrites\nof the live data\n\n-- \nfdr\n\n",
"msg_date": "Tue, 28 Aug 2012 01:57:58 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Investigating the reason for a very big TOAST table size"
},
{
"msg_contents": "On Tue, Aug 28, 2012 at 1:57 AM, Daniel Farina <[email protected]> wrote:\n> My database has taken many days (over a week) to stabilize. I was\n> about to write that it never stops growing (we'd eventually have to\n> VACUUM FULL or do a column rotation), but that is not true. This\n> graph is a bit spotty for unrelated reasons, but here's something like\n> what I'm seeing:\n>\n> http://i.imgur.com/tbj1n.png\n\nGraph in attachment form for posterity of the archives. I forgot they\ntake attachments.\n\n-- \nfdr",
"msg_date": "Tue, 28 Aug 2012 01:59:15 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Investigating the reason for a very big TOAST table size"
},
{
"msg_contents": "On Mon, Aug 27, 2012 at 11:24 PM, Liron Shiri <[email protected]> wrote:\n> There were no \"hot standby\" configuration, but the DB has start grow fast after restoring from a base backup as described in http://www.postgresql.org/docs/8.3/static/continuous-archiving.html#BACKUP-BASE-BACKUP\n\nI'm trying to confirm a theory about why this happens. Can you answer\na question for me?\n\nI've just seen this happen twice. Both are involving toasted columns,\nbut the other critical thing they share is that they use in-database\noperators to modify the toasted data.\n\nFor example, here is something that would not display pathological\nwarm/hot standby-promotion bloat, if I am correct:\n\nUPDATE foo SET field='value';\n\nBut here's something that might:\n\nUPDATE foo SET field=field || 'value'\n\nOther examples might include tsvector_update_trigger (also: that means\nthat triggers can cause this workload also, even if you do not write\nqueries that directly use such modification operators) , but in\nprinciple any operation that does not completely overwrite the value\nmay be susceptible, or so the information I have would indicate. What\ndo you think, does that sound like your workload, or do you do full\nreplacement of values in your UPDATEs, which would invalidate this\ntheory?\n\nI'm trying to figure out why standby promotion works so often with no\nproblems but sometimes bloats in an incredibly pathological way\nsometimes, and obviously I think it might be workload dependent.\n\n-- \nfdr\n\n",
"msg_date": "Thu, 30 Aug 2012 01:10:30 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Investigating the reason for a very big TOAST table size"
},
{
"msg_contents": "We do not use in-database operators to modify the toasted data.\nThe update operations we perform on the problematic table are in the form of \n\nUPDATE foo SET field='value' WHERE nid = to_uid(#objId#)\n\n-----Original Message-----\nFrom: Daniel Farina [mailto:[email protected]] \nSent: Thursday, August 30, 2012 11:11 AM\nTo: Liron Shiri\nCc: [email protected]\nSubject: Re: [PERFORM] Investigating the reason for a very big TOAST table size\n\nOn Mon, Aug 27, 2012 at 11:24 PM, Liron Shiri <[email protected]> wrote:\n> There were no \"hot standby\" configuration, but the DB has start grow \n> fast after restoring from a base backup as described in \n> http://www.postgresql.org/docs/8.3/static/continuous-archiving.html#BA\n> CKUP-BASE-BACKUP\n\nI'm trying to confirm a theory about why this happens. Can you answer a question for me?\n\nI've just seen this happen twice. Both are involving toasted columns, but the other critical thing they share is that they use in-database operators to modify the toasted data.\n\nFor example, here is something that would not display pathological warm/hot standby-promotion bloat, if I am correct:\n\nUPDATE foo SET field='value';\n\nBut here's something that might:\n\nUPDATE foo SET field=field || 'value'\n\nOther examples might include tsvector_update_trigger (also: that means that triggers can cause this workload also, even if you do not write queries that directly use such modification operators) , but in principle any operation that does not completely overwrite the value may be susceptible, or so the information I have would indicate. What do you think, does that sound like your workload, or do you do full replacement of values in your UPDATEs, which would invalidate this theory?\n\nI'm trying to figure out why standby promotion works so often with no problems but sometimes bloats in an incredibly pathological way sometimes, and obviously I think it might be workload dependent.\n\n--\nfdr\n\nScanned by Check Point Total Security Gateway.\n\n",
"msg_date": "Thu, 30 Aug 2012 11:34:48 +0300",
"msg_from": "Liron Shiri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Investigating the reason for a very big TOAST table\n size"
},
{
"msg_contents": "On Thu, Aug 30, 2012 at 1:34 AM, Liron Shiri <[email protected]> wrote:\n> We do not use in-database operators to modify the toasted data.\n> The update operations we perform on the problematic table are in the form of\n>\n> UPDATE foo SET field='value' WHERE nid = to_uid(#objId#)\n\nAh, well, there goes that idea, although it may still be good enough\nto reproduce the problem, even if it is not responsible for all\nreproductions...\n\nI guess it's time to clear some time to try.\n\n-- \nfdr\n\n",
"msg_date": "Thu, 30 Aug 2012 01:39:30 -0700",
"msg_from": "Daniel Farina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Investigating the reason for a very big TOAST table size"
}
] |
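For reference, a minimal sketch of how the on-disk sizes discussed in the thread above can be cross-checked, and of the CLUSTER rewrite Daniel recommends instead of VACUUM FULL on 8.3. The table name my_table comes from the messages; the index name my_table_pkey is an assumption.

-- Heap size versus everything else (TOAST plus indexes) versus the total.
SELECT pg_size_pretty(pg_relation_size('my_table'))        AS heap_only,
       pg_size_pretty(pg_total_relation_size('my_table')
                      - pg_relation_size('my_table'))      AS toast_plus_indexes,
       pg_size_pretty(pg_total_relation_size('my_table'))  AS total;

-- Rewrite the table and its TOAST data to reclaim the bloated space.
-- CLUSTER holds an exclusive lock on the table for the duration of the rewrite.
CLUSTER my_table USING my_table_pkey;

Regular (auto)vacuum afterwards should keep the reclaimed space from building up again between rewrites.
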
[
{
"msg_contents": "From: Liron Shiri\nSent: Sunday, August 26, 2012 3:47 PM\nTo: '[email protected]'\nSubject: Investigating the reason for a very big TOAST table size\nImportance: High\n\nHi,\n\nWe have a table which its TOAST table size is 66 GB, and we believe should be smaller.\nThe table size is 472 kb. And the table has 4 columns that only one of them should be toasted.\n\nThe table has only 8 dead tuples, so apparently this is not the problem.\n\nThis table contains a column with bytea type data (kept as TOAST). We tried to check what is the size of the toasted data in each row by using the following query (the data_blob is the bytea column):\n\nSELECT nid, octet_length(data_blob) FROM my_table ORDER BY octet_length(data_blob) DESC;\n\nThis result contain 1782 rows. The sizes I get from each row are between 35428 to 42084.\n\n1782 * 38000 = 67716000 byte = 64.579 MB .\n\nWhat can be the reason for a table size of 66 GB? What else should I check?\n\nThanks in advance,\nLiron\n\n From: Liron Shiri Sent: Sunday, August 26, 2012 3:47 PMTo: '[email protected]'Subject: Investigating the reason for a very big TOAST table sizeImportance: High Hi, We have a table which its TOAST table size is 66 GB, and we believe should be smaller.The table size is 472 kb. And the table has 4 columns that only one of them should be toasted. The table has only 8 dead tuples, so apparently this is not the problem. This table contains a column with bytea type data (kept as TOAST). We tried to check what is the size of the toasted data in each row by using the following query (the data_blob is the bytea column): SELECT nid, octet_length(data_blob) FROM my_table ORDER BY octet_length(data_blob) DESC; This result contain 1782 rows. The sizes I get from each row are between 35428 to 42084. 1782 * 38000 = 67716000 byte = 64.579 MB . What can be the reason for a table size of 66 GB? What else should I check? Thanks in advance,Liron",
"msg_date": "Sun, 26 Aug 2012 15:51:24 +0300",
"msg_from": "Liron Shiri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Investigating the reason for a very big TOAST table size"
}
] |
[
{
"msg_contents": "Hello all,\n\nI have a plpgsql function that takes a few seconds (less than 5) when \nexecuted from psql. The same function, when invoked from java via a \nprepared statement takes a few minutes. There are a few queries in the \nfunction. Out of these, the first query takes input parameters for \nfiltering the data. It is this query which takes a long time when the \nprocedure is invoked from java. To ensure that the query does use actual \nvalues (and not bind variables) for optimization, we used \n\nexecute\n'\nselect x.col_type_desc,x.acc_id,acc_svr from (.....\n' \nusing d_from_date,d_to_date\n\nIt did not help. Any suggestions? It is from_date and to_date on which \ndata gets filtered. We are using the same values for filtering, when we \nexecute it from java/psql\n\nRegards,\nJayadevan \n\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\nHello all,\n\nI have a plpgsql function that takes\na few seconds (less than 5) when executed from psql. The same function,\nwhen invoked from java via a prepared statement takes a few minutes. There\nare a few queries in the function. Out of these, the first query takes\ninput parameters for filtering the data. It is this query which takes a\nlong time when the procedure is invoked from java. To ensure that the query\ndoes use actual values (and not bind variables) for optimization, we used\n\n\nexecute\n'\nselect x.col_type_desc,x.acc_id,acc_svr\nfrom (.....\n' \nusing d_from_date,d_to_date\n\nIt did not help. Any suggestions? It\nis from_date and to_date on which data gets filtered. We are using the\nsame values for filtering, when we execute it from java/psql\n\nRegards,\nJayadevan \n\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only\nfor the person to whom it is addressed and may contain confidential and/or\nprivileged material. If you have received this e-mail in error, kindly\ncontact the sender and destroy all copies of the original communication.\nIBS makes no warranty, express or implied, nor guarantees the accuracy,\nadequacy or completeness of the information contained in this email or\nany attachment and is not liable for any errors, defects, omissions, viruses\nor for resultant loss or damage, if any, direct or indirect.\"",
"msg_date": "Mon, 27 Aug 2012 18:07:32 +0530",
"msg_from": "Jayadevan M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Execution from java - slow"
},
{
"msg_contents": "Jayadevan M wrote:\n> I have a plpgsql function that takes a few seconds (less than 5) when\nexecuted from psql. The same\n> function, when invoked from java via a prepared statement takes a few\nminutes. There are a few queries\n> in the function. Out of these, the first query takes input parameters\nfor filtering the data. It is\n> this query which takes a long time when the procedure is invoked from\njava. To ensure that the query\n> does use actual values (and not bind variables) for optimization, we\nused\n> \n> execute\n> '\n> select x.col_type_desc,x.acc_id,acc_svr from (.....\n> '\n> using d_from_date,d_to_date\n> \n> It did not help. Any suggestions? It is from_date and to_date on which\ndata gets filtered. We are\n> using the same values for filtering, when we execute it from java/psql\n\nUse the auto_explain contrib with\nauto_explain.log_nested_statements=on\nto see the statements that are really executed\nand compare!\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Mon, 27 Aug 2012 14:47:12 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Execution from java - slow"
},
{
"msg_contents": "On Mon, Aug 27, 2012 at 6:07 PM, Jayadevan M\n<[email protected]>wrote:\n\n> Hello all,\n>\n> I have a plpgsql function that takes a few seconds (less than 5) when\n> executed from psql. The same function, when invoked from java via a\n> prepared statement takes a few minutes. There are a few queries in the\n> function. Out of these, the first query takes input parameters for\n> filtering the data. It is this query which takes a long time when the\n> procedure is invoked from java. To ensure that the query does use actual\n> values (and not bind variables) for optimization, we used\n>\n> execute\n> '\n> select x.col_type_desc,x.acc_id,acc_svr from (.....\n> '\n> using d_from_date,d_to_date\n>\n> It did not help. Any suggestions? It is from_date and to_date on which\n> data gets filtered. We are using the same values for filtering, when we\n> execute it from java/psql\n>\n>\nIt looks highly unlikely that a function execution will take more time\nthrough different client interfaces. May be you want to log the function\ninput parameters and see if they are coming different through these\ninterfaces (I think you can use RAISE NOTICE for that). I'm not sure but\nclient side encoding might also cause changes in the real values of the\ndate parameters you are passing (e.g mm/dd/yy vs dd/mm/yy). So that will be\nworth checking as well.\n\nThanks,\nPavan\n\nOn Mon, Aug 27, 2012 at 6:07 PM, Jayadevan M <[email protected]> wrote:\nHello all,\n\nI have a plpgsql function that takes\na few seconds (less than 5) when executed from psql. The same function,\nwhen invoked from java via a prepared statement takes a few minutes. There\nare a few queries in the function. Out of these, the first query takes\ninput parameters for filtering the data. It is this query which takes a\nlong time when the procedure is invoked from java. To ensure that the query\ndoes use actual values (and not bind variables) for optimization, we used\n\n\nexecute\n'\nselect x.col_type_desc,x.acc_id,acc_svr\nfrom (.....\n' \nusing d_from_date,d_to_date\n\nIt did not help. Any suggestions? It\nis from_date and to_date on which data gets filtered. We are using the\nsame values for filtering, when we execute it from java/psql\n\nIt looks highly unlikely that a function execution will take more time through different client interfaces. May be you want to log the function input parameters and see if they are coming different through these interfaces (I think you can use RAISE NOTICE for that). I'm not sure but client side encoding might also cause changes in the real values of the date parameters you are passing (e.g mm/dd/yy vs dd/mm/yy). So that will be worth checking as well.\nThanks,Pavan",
"msg_date": "Tue, 28 Aug 2012 12:41:16 +0530",
"msg_from": "Pavan Deolasee <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Execution from java - slow"
},
{
"msg_contents": "On Tue, Aug 28, 2012 at 2:11 AM, Pavan Deolasee\n<[email protected]> wrote:\n>\n>\n> On Mon, Aug 27, 2012 at 6:07 PM, Jayadevan M <[email protected]>\n> wrote:\n>>\n>> Hello all,\n>>\n>> I have a plpgsql function that takes a few seconds (less than 5) when\n>> executed from psql. The same function, when invoked from java via a\n>> prepared statement takes a few minutes. There are a few queries in the\n>> function. Out of these, the first query takes input parameters for filtering\n>> the data. It is this query which takes a long time when the procedure is\n>> invoked from java. To ensure that the query does use actual values (and not\n>> bind variables) for optimization, we used\n>>\n>> execute\n>> '\n>> select x.col_type_desc,x.acc_id,acc_svr from (.....\n>> '\n>> using d_from_date,d_to_date\n>>\n>> It did not help. Any suggestions? It is from_date and to_date on which\n>> data gets filtered. We are using the same values for filtering, when we\n>> execute it from java/psql\n>>\n>\n> It looks highly unlikely that a function execution will take more time\n> through different client interfaces. May be you want to log the function\n> input parameters and see if they are coming different through these\n> interfaces (I think you can use RAISE NOTICE for that). I'm not sure but\n> client side encoding might also cause changes in the real values of the date\n> parameters you are passing (e.g mm/dd/yy vs dd/mm/yy). So that will be worth\n> checking as well.\n\n\nYeah. well, hm. Is the function returning a whole bunch of data?\nAlso, try confirming the slow runtime from the server's point of view;\nlog_min_duration_statement is a good setting for that.\n\nmerlin\n\n",
"msg_date": "Tue, 28 Aug 2012 08:32:15 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Execution from java - slow"
},
{
"msg_contents": "Hi,\n\n> \n> \n> Yeah. well, hm. Is the function returning a whole bunch of data?\n> Also, try confirming the slow runtime from the server's point of view;\n> log_min_duration_statement is a good setting for that.\n> \nI did try those options. In the end, removing an order by (it was not \nnecessary) from the SELECT solved the problem. But why the behavior was \ndifferent when executed from psql and java is still a mystery.\n\nThanks a lot for the suggestions.\nRegards,\nJayadevan\n\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\nHi,\n\n> \n> \n> Yeah. well, hm. Is the function returning a whole bunch of data?\n> Also, try confirming the slow runtime from the server's point of view;\n> log_min_duration_statement is a good setting for that.\n> \nI did try those options. In the end, removing an order by (it was not necessary)\nfrom the SELECT solved the problem. But why the behavior was different\nwhen executed from psql and java is still a mystery.\n\nThanks a lot for the suggestions.\nRegards,\nJayadevan\n\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only\nfor the person to whom it is addressed and may contain confidential and/or\nprivileged material. If you have received this e-mail in error, kindly\ncontact the sender and destroy all copies of the original communication.\nIBS makes no warranty, express or implied, nor guarantees the accuracy,\nadequacy or completeness of the information contained in this email or\nany attachment and is not liable for any errors, defects, omissions, viruses\nor for resultant loss or damage, if any, direct or indirect.\"",
"msg_date": "Mon, 3 Sep 2012 14:05:27 +0530",
"msg_from": "Jayadevan M <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Execution from java - slow"
}
] |
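For reference, a minimal sketch of the auto_explain setup suggested above, run as a superuser in the session that calls the function; the parameter values here are illustrative, not tuned recommendations.

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log the plan of every statement
SET auto_explain.log_nested_statements = on;  -- include statements run inside plpgsql
SET auto_explain.log_analyze = on;            -- log actual timings, not just estimates

-- Merlin's server-side alternative: log any statement slower than one second.
SET log_min_duration_statement = '1s';

Comparing the plans logged from the psql session with those logged from the JDBC session should show where the two executions diverge.
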
[
{
"msg_contents": "Hi,\n\nWe have been using the current version of postgres i.e. 9.1.4 with\nstreaming replication on. While vacuuming we noticed that certain dead rows\nare not getting removed and following debug information is printed:\n\n\"DETAIL: 12560 dead row versions cannot be removed yet.\"\n\nAs per suggestion, we made sure that no long running transactions are\nactive. Also all the applications were stopped during this time.\n\nCan anybody highlight the possible reason for the dead rows not been\ncleaned?\n\nFYI: We used the command VACUUM FULL ANALYZE VERBOSE table_name; command.\n\nRegards,\nNimesh.\n\nHi,We have been using the current version of postgres i.e. 9.1.4 with streaming replication on. While vacuuming we noticed that certain dead rows are not getting removed and following debug information is printed:\n\"DETAIL: 12560 dead row versions cannot be removed yet.\"As per suggestion, we made sure that no long running transactions are active. Also all the applications were stopped during this time. \nCan anybody highlight the possible reason for the dead rows not been cleaned?FYI: We used the command VACUUM FULL ANALYZE VERBOSE table_name; command.\nRegards,Nimesh.",
"msg_date": "Tue, 28 Aug 2012 10:03:00 +0530",
"msg_from": "Nimesh Satam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuum problems with 9.1"
},
{
"msg_contents": "On Tue, Aug 28, 2012 at 10:03 AM, Nimesh Satam <[email protected]>wrote:\n\n> Hi,\n>\n> We have been using the current version of postgres i.e. 9.1.4 with\n> streaming replication on. While vacuuming we noticed that certain dead rows\n> are not getting removed and following debug information is printed:\n>\n> \"DETAIL: 12560 dead row versions cannot be removed yet.\"\n>\n> As per suggestion, we made sure that no long running transactions are\n> active. Also all the applications were stopped during this time.\n>\n> Can anybody highlight the possible reason for the dead rows not been\n> cleaned?\n>\n>\nAre you absolutely sure that there are no other client connections open\nwhich are actively deleting/updating records ? The above message would\nusually come when certain rows which are otherwise DEAD (meaning, deleting\nor updating transaction has already committed) but can't be removed just\nyet because there is at least one old transaction that may still see the\ntuple as visible. If there are no open transactions, then I can only think\nabout a concurrent auto-analyze running that can prevent some tuples from\nbeing vacuumed.\n\nWhat happens if you run the command again ? Do you get the exact same\nnumber again ?\n\nAlso note that any concurrent transaction can cause this, even if the\ntransaction does not access the table under vacuum operation.\n\n\n> FYI: We used the command VACUUM FULL ANALYZE VERBOSE table_name; command.\n>\n>\nI hope you are aware that VACUUM FULL is a costly operation because it\nrewrites the entire table again. You need VACUUM FULL only in cases of\nsevere bloat. Otherwise a plain VACUUM (or auto-vacuum) should be enough to\nhandle regular bloat.\n\nThanks,\nPavan\n\nOn Tue, Aug 28, 2012 at 10:03 AM, Nimesh Satam <[email protected]> wrote:\n\nHi,We have been using the current version of postgres i.e. 9.1.4 with streaming replication on. While vacuuming we noticed that certain dead rows are not getting removed and following debug information is printed:\n\"DETAIL: 12560 dead row versions cannot be removed yet.\"As per suggestion, we made sure that no long running transactions are active. Also all the applications were stopped during this time. \nCan anybody highlight the possible reason for the dead rows not been cleaned?Are you absolutely sure that there are no other client connections open which are actively deleting/updating records ? The above message would usually come when certain rows which are otherwise DEAD (meaning, deleting or updating transaction has already committed) but can't be removed just yet because there is at least one old transaction that may still see the tuple as visible. If there are no open transactions, then I can only think about a concurrent auto-analyze running that can prevent some tuples from being vacuumed.\nWhat happens if you run the command again ? Do you get the exact same number again ?Also note that any concurrent transaction can cause this, even if the transaction does not access the table under vacuum operation.\n FYI: We used the command VACUUM FULL ANALYZE VERBOSE table_name; command.\nI hope you are aware that VACUUM FULL is a costly operation because it rewrites the entire table again. You need VACUUM FULL only in cases of severe bloat. Otherwise a plain VACUUM (or auto-vacuum) should be enough to handle regular bloat.\nThanks,Pavan",
"msg_date": "Tue, 28 Aug 2012 12:03:13 +0530",
"msg_from": "Pavan Deolasee <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum problems with 9.1"
},
{
"msg_contents": "On 08/28/2012 12:33 PM, Nimesh Satam wrote:\n> Hi,\n>\n> We have been using the current version of postgres i.e. 9.1.4 with\n> streaming replication on. While vacuuming we noticed that certain dead\n> rows are not getting removed and following debug information is printed:\n>\n> \"DETAIL: 12560 dead row versions cannot be removed yet.\"\n>\n> As per suggestion, we made sure that no long running transactions are\n> active. Also all the applications were stopped during this time.\n>\n> Can anybody highlight the possible reason for the dead rows not been\n> cleaned?\n\nI don't know if prepared transactions could cause this exact message, \nbut check:\n\n select * from pg_prepared_xacts ;\n\nto see if you have any prepared transactions (from two-phase commit) \nlying around.\n\nIf you don't use XA or 2PC, consider setting max_prepared_transactions \nto 0 in postgresql.conf if it isn't already.\n\n--\nCraig Ringer\n\n",
"msg_date": "Tue, 28 Aug 2012 14:53:54 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum problems with 9.1"
}
] |
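For reference, a sketch of the two checks suggested in the replies above, written against the 9.1 catalogs:

-- Orphaned prepared transactions (two-phase commit) hold back vacuum.
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;

-- Any open transaction (even one that never touches the vacuumed table)
-- can keep dead rows from being removed; on 9.1 pg_stat_activity shows them.
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;
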
[
{
"msg_contents": "Hi all,\n\nI've been trying to apply pg_tgrm for the search-function of my\napplication. The database fits a few times in the available RAM, and is\nmostly read-only.\nPlans, schema and configs in attachment. Postgresql version 9.1.4 on Debian.\n\nWhen just searching in one table, it behaves perfectly here. When I put\nconstraints on multiple connected tables (performance and performer), it\ntakes some bad decisions. Somehow the planner thinks that an index scan on\na trigram index (on a string) is as fast as an index scan on a btree of an\nint. Because of that, it will combine both index scans into an \"AND\" bitmap\nindex scan. Since this is done in a nested loop, the performance gets very\nbad. The trigram index scan should not be repeated as it is relatively slow\nand always the same query.\n\nWhen I disable bitmap scans, it will search on both tables and then hash\neverything together. This avoids launching the same index scan over and\nover again. This is much faster.\n\nSince my database is mostly in memory, I guess I could safely disable\nbitmap scan (or at least for some query), since I understand that this kind\nof scan is often a way to have a better IO performance. There's little IO\nin my setup.\nHowever, I'd rather get some help in fixing it right!\n\nThanks,\n\nMathieu",
"msg_date": "Tue, 28 Aug 2012 09:39:26 +0200",
"msg_from": "Mathieu De Zutter <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_trgm and slow bitmap index scan plan"
},
{
"msg_contents": "On Tue, Aug 28, 2012 at 2:39 AM, Mathieu De Zutter <[email protected]> wrote:\n> Hi all,\n>\n> I've been trying to apply pg_tgrm for the search-function of my application.\n> The database fits a few times in the available RAM, and is mostly read-only.\n> Plans, schema and configs in attachment. Postgresql version 9.1.4 on Debian.\n>\n> When just searching in one table, it behaves perfectly here. When I put\n> constraints on multiple connected tables (performance and performer), it\n> takes some bad decisions. Somehow the planner thinks that an index scan on a\n> trigram index (on a string) is as fast as an index scan on a btree of an\n> int. Because of that, it will combine both index scans into an \"AND\" bitmap\n> index scan. Since this is done in a nested loop, the performance gets very\n> bad. The trigram index scan should not be repeated as it is relatively slow\n> and always the same query.\n>\n> When I disable bitmap scans, it will search on both tables and then hash\n> everything together. This avoids launching the same index scan over and over\n> again. This is much faster.\n>\n> Since my database is mostly in memory, I guess I could safely disable bitmap\n> scan (or at least for some query), since I understand that this kind of scan\n> is often a way to have a better IO performance. There's little IO in my\n> setup.\n> However, I'd rather get some help in fixing it right!\n\nYeah -- gist_trgm_ops is expensive and the planner is not taking that\ninto account. I wonder if operator classes (pg_opclass) should have a\nplanner influencing costing component.\n\nmerlin\n\n",
"msg_date": "Tue, 28 Aug 2012 08:27:24 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_trgm and slow bitmap index scan plan"
}
] |
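For reference, a minimal sketch of the kind of trigram setup this thread is about. The real schema and plans were sent as attachments that are not reproduced here, so the table and column names below (performer, name) are assumptions.

CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- GiST trigram index; gist_trgm_ops is the operator class whose scan cost
-- the planner is underestimating in the plans discussed above.
CREATE INDEX performer_name_trgm_idx
    ON performer USING gist (name gist_trgm_ops);

-- A similarity search that can use this index:
SELECT id, name FROM performer WHERE name % 'mozart';
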
[
{
"msg_contents": "Hi,\n\nI have written some Java code which builds a postgresql function. That function calls approximately 6 INSERT statements with a RETURNING clause. I recreate and re-run the function about 900,000 times. I use JDBC to execute these functions on postgresql 8.3 on Windows. When I tried running this on a single Connection of Postgresql, it failed (some kind of memory error). So I split the JDBC connections up into chunks of 5000. I reran and everything was fine. It took about 1 hour to execute all the updates.\n\nSince it took so long to perform the update, I wanted to prevent other users from querying the data during that time. So I read about the LOCK command. It seemed like I should LOCK all the tables in the database with an ACCESS EXCLUSIVE mode. That would prevent anyone from getting data while the database was making its updates.\n\nSince a LOCK is only valid for 1 transaction, I set autocommit to FALSE. I also removed the code which chunked up the inserts. I had read that a single transaction ought to have better performance than committing after each insert, but that was clearly not what ended up happening in my case.\n\nIn my case, a few problems occurred. Number 1, the process ran at least 8 hours and never finished. It did not finish because the hard drive was filled up. After running a manual vacuum (VACUUM FULL), no space was freed up. I think this has cost me 20 GB of space. Is there any way to free this space up? I even dropped the database to no avail.\n\nSecondly, why did this process take over 8 hours to run? While reading the performance mailing list, it seems like recommendations are to run lots of INSERTS in a single commit. Is 5 million too many? Is redefining a function over and over inside a transaction a problem? Does the RETURNING clause present a problem during a single transaction?\n\nIf anyone has any suggestions for me, I would really appreciate it.\n\nTina\n\nHi,I have written some Java code which builds a postgresql function. That function calls approximately 6 INSERT statements with a RETURNING clause. I recreate and re-run the function about 900,000 times. I use JDBC to execute these functions on postgresql 8.3 on Windows. When I tried running this on a single Connection of Postgresql, it failed (some kind of memory error). So I split the JDBC connections up into chunks of 5000. I reran and everything was fine. It took about 1 hour to execute all the updates.Since it took so long to perform the update, I wanted to prevent other users from querying the data during that time. So I read about the LOCK command. It seemed like I should LOCK all the tables in the database with an ACCESS EXCLUSIVE mode. That would prevent anyone from getting data while the database was making its updates.Since a LOCK is only valid for 1 transaction, I set autocommit to FALSE. I\n also removed the code which chunked up the inserts. I had read that a single transaction ought to have better performance than committing after each insert, but that was clearly not what ended up happening in my case.In my case, a few problems occurred. Number 1, the process ran at least 8 hours and never finished. It did not finish because the hard drive was filled up. After running a manual vacuum (VACUUM FULL), no space was freed up. I think this has cost me 20 GB of space. Is there any way to free this space up? I even dropped the database to no avail.Secondly, why did this process take over 8 hours to run? 
While reading the performance mailing list, it seems like recommendations are to run lots of INSERTS in a single commit. Is 5 million too many? Is redefining a function over and over inside a transaction a problem? Does the RETURNING clause present a problem during a single transaction?If anyone has any suggestions for me, I would really appreciate it.Tina",
"msg_date": "Wed, 29 Aug 2012 23:34:56 -0700 (PDT)",
"msg_from": "Eileen <[email protected]>",
"msg_from_op": true,
"msg_subject": "JDBC 5 million function insert returning Single Transaction Lock\n\tAccess Exclusive Problem"
},
{
"msg_contents": "Dave Cramer\n\ndave.cramer(at)credativ(dot)ca\nhttp://www.credativ.ca\n\n\nOn Thu, Aug 30, 2012 at 2:34 AM, Eileen <[email protected]> wrote:\n> Hi,\n>\n> I have written some Java code which builds a postgresql function. That\n> function calls approximately 6 INSERT statements with a RETURNING clause. I\n> recreate and re-run the function about 900,000 times. I use JDBC to execute\n> these functions on postgresql 8.3 on Windows. When I tried running this on\n> a single Connection of Postgresql, it failed (some kind of memory error).\n> So I split the JDBC connections up into chunks of 5000. I reran and\n> everything was fine. It took about 1 hour to execute all the updates.\n\n\n>\n> Since it took so long to perform the update, I wanted to prevent other users\n> from querying the data during that time. So I read about the LOCK command.\n> It seemed like I should LOCK all the tables in the database with an ACCESS\n> EXCLUSIVE mode. That would prevent anyone from getting data while the\n> database was making its updates.\n\nDo you understand how MVCC works? Do you really need to lock out users ?\n>\n> Since a LOCK is only valid for 1 transaction, I set autocommit to FALSE. I\n> also removed the code which chunked up the inserts. I had read that a\n> single transaction ought to have better performance than committing after\n> each insert, but that was clearly not what ended up happening in my case.\n\nWe would need more information as to what you are doing.\n>\n> In my case, a few problems occurred. Number 1, the process ran at least 8\n> hours and never finished. It did not finish because the hard drive was\n> filled up. After running a manual vacuum (VACUUM FULL), no space was freed\n> up. I think this has cost me 20 GB of space. Is there any way to free this\n> space up? I even dropped the database to no avail.\n>\n> Secondly, why did this process take over 8 hours to run? While reading the\n> performance mailing list, it seems like recommendations are to run lots of\n> INSERTS in a single commit. Is 5 million too many? Is redefining a\n> function over and over inside a transaction a problem? Does the RETURNING\n> clause present a problem during a single transaction?\n\nVACUUM FULL on 8.3 is not a good idea\n>\n> If anyone has any suggestions for me, I would really appreciate it.\n>\n\nCan you explain at a high level what you are trying to do ?\n\n> Tina\n\n",
"msg_date": "Fri, 31 Aug 2012 09:50:08 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JDBC 5 million function insert returning Single\n\tTransaction Lock Access Exclusive Problem"
},
{
"msg_contents": "Eileen wrote:\n> I have written some Java code which builds a postgresql function.\nThat function calls approximately 6\n> INSERT statements with a RETURNING clause. I recreate and re-run the\nfunction about 900,000 times. I\n> use JDBC to execute these functions on postgresql 8.3 on Windows.\nWhen I tried running this on a\n> single Connection of Postgresql, it failed (some kind of memory\nerror). So I split the JDBC\n> connections up into chunks of 5000. I reran and everything was fine.\nIt took about 1 hour to execute\n> all the updates.\n> \n> Since it took so long to perform the update, I wanted to prevent other\nusers from querying the data\n> during that time. So I read about the LOCK command. It seemed like I\nshould LOCK all the tables in\n> the database with an ACCESS EXCLUSIVE mode. That would prevent anyone\nfrom getting data while the\n> database was making its updates.\n> \n> Since a LOCK is only valid for 1 transaction, I set autocommit to\nFALSE. I also removed the code\n> which chunked up the inserts. I had read that a single transaction\nought to have better performance\n> than committing after each insert, but that was clearly not what ended\nup happening in my case.\n> \n> In my case, a few problems occurred. Number 1, the process ran at\nleast 8 hours and never finished.\n> It did not finish because the hard drive was filled up. After running\na manual vacuum (VACUUM FULL),\n> no space was freed up. I think this has cost me 20 GB of space. Is\nthere any way to free this space\n> up? I even dropped the database to no avail.\n\nTry to identify what files use the space.\nLook at the size of directories.\nCould it be that \"archive_mode\" is \"on\" and you ran out of space\nfor archived WALs?\n\nWhen you drop a database, all files that belong to the database\nare gone.\n\n> Secondly, why did this process take over 8 hours to run? While\nreading the performance mailing list,\n> it seems like recommendations are to run lots of INSERTS in a single\ncommit. Is 5 million too many?\n> Is redefining a function over and over inside a transaction a problem?\nDoes the RETURNING clause\n> present a problem during a single transaction?\n\nIt would be interesting to know how the time was spent.\nWere the CPUs busy? Were there locks?\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Fri, 31 Aug 2012 16:18:21 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JDBC 5 million function insert returning Single Transaction Lock\n\tAccess Exclusive Problem"
},
{
"msg_contents": "On Thu, Aug 30, 2012 at 12:34 AM, Eileen <[email protected]> wrote:\n> Hi,\n>\n> I have written some Java code which builds a postgresql function. That\n> function calls approximately 6 INSERT statements with a RETURNING clause. I\n> recreate and re-run the function about 900,000 times. I use JDBC to execute\n\nThat's generally a pretty inefficient way of doing things. Can you\ncreate a function that does what you want and not drop / recreate it\nover and over?\n\n",
"msg_date": "Fri, 31 Aug 2012 09:18:12 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JDBC 5 million function insert returning Single\n\tTransaction Lock Access Exclusive Problem"
},
{
"msg_contents": "On 08/30/2012 02:34 PM, Eileen wrote:\n\n> In my case, a few problems occurred. Number 1, the process ran at least\n> 8 hours and never finished.\n\nYou're on a very old version of Pg, so you're missing out on a lot of \nimprovements made since then.\n\nOne of them is, if I recall correctly, an improvement to exception \nhandling efficiency. Does your function use BEGIN ... EXCEPTION to (say) \nhandle data validation errors?\n\n--\nCraig Ringer\n\n\n",
"msg_date": "Sat, 01 Sep 2012 08:43:49 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JDBC 5 million function insert returning Single Transaction\n\tLock Access Exclusive Problem"
},
{
"msg_contents": "Thank you for your help. At a high-level, I am just updating about 900k records in the database with new information, and during that update timetable, I didn't want users to get inconsistent data.\n\nI read about the MVCC and discovered that I didn't necessarily need the LOCK statement. However, based on what I read, I thought that versions of the database would include changes to the schema. I found that not to be the case. I.e. when I queried the database while a transaction was in the process of DROPing tables, it gave me an error instead of an older snapshot. Is there any database which actually isolates schema changes? I was just curious.\n\nI have verified that while I'm DELETING rows from one session, that other sessions can retrieve the old data in a consistent state. Although, in order to actually successfully DELETE the items, I had to add an index for all my Foreign Key fields.\n\nTina\n\nFrom: Dave Cramer <[email protected]>\nTo: Eileen <[email protected]> \nCc: \"[email protected]\" <[email protected]> \nSent: Friday, August 31, 2012 6:50 AM\nSubject: Re: [PERFORM] JDBC 5 million function insert returning Single Transaction Lock Access Exclusive Problem\n\nDave Cramer\n\ndave.cramer(at)credativ(dot)ca\nhttp://www.credativ.ca\n\n\nOn Thu, Aug 30, 2012 at 2:34 AM, Eileen <[email protected]> wrote:\n> Hi,\n>\n> I have written some Java code which builds a postgresql function. That\n> function calls approximately 6 INSERT statements with a RETURNING clause. I\n> recreate and re-run the function about 900,000 times. I use JDBC to execute\n> these functions on postgresql 8.3 on Windows. When I tried running this on\n> a single Connection of Postgresql, it failed (some kind of memory error).\n> So I split the JDBC connections up into chunks of 5000. I reran and\n> everything was fine. It took about 1 hour to execute all the updates.\n\n\n>\n> Since it took so long to perform the update, I wanted to prevent other users\n> from querying the data during that time. So I read about the LOCK command.\n> It seemed like I should LOCK all the tables in the database with an ACCESS\n> EXCLUSIVE mode. That would prevent anyone from getting data while the\n> database was making its updates.\n\nDo you understand how MVCC works? Do you really need to lock out users ?\n>\n> Since a LOCK is only valid for 1 transaction, I set autocommit to FALSE. I\n> also removed the code which chunked up the inserts. I had read that a\n> single transaction ought to have better performance than committing after\n> each insert, but that was clearly not what ended up happening in my case.\n\nWe would need more information as to what you are doing.\n>\n> In my case, a few problems occurred. Number 1, the process ran at least 8\n> hours and never finished. It did not finish because the hard drive was\n> filled up. After running a manual vacuum (VACUUM FULL), no space was freed\n> up. I think this has cost me 20 GB of space. Is there any way to free this\n> space up? I even dropped the database to no avail.\n>\n> Secondly, why did this process take over 8 hours to run? While reading the\n> performance mailing list, it seems like recommendations are to run lots of\n> INSERTS in a single commit. Is 5 million too many? Is redefining a\n> function over and over inside a transaction a problem? 
Does the RETURNING\n> clause present a problem during a single transaction?\n\nVACUUM FULL on 8.3 is not a good idea\n>\n> If anyone has any suggestions for me, I would really appreciate it.\n>\n\nCan you explain at a high level what you are trying to do ?\n\n> Tina\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\n \nThank you for your help. At a high-level, I am just updating about 900k records in the database with new information, and during that update timetable, I didn't want users to get inconsistent data.I read about the MVCC and\n discovered that I didn't necessarily need the LOCK statement. However, based on what I read, I thought that versions of the database would include changes to the schema. I found that not to be the case. I.e. when I queried the database while a transaction was in the process of DROPing tables, it gave me an error instead of an older snapshot. Is there any database which actually isolates schema changes? I was just curious.I have verified that while\n I'm DELETING rows from one session, that other sessions can retrieve the old data in a consistent state. Although, in order to actually successfully DELETE the items, I had to add an index for all my Foreign Key fields.Tina From: Dave Cramer <[email protected]> To: Eileen <[email protected]> Cc: \"[email protected]\" <[email protected]> Sent: Friday, August 31, 2012 6:50 AM Subject: Re: [PERFORM] JDBC 5 million function insert returning Single Transaction Lock Access Exclusive Problem \nDave Cramerdave.cramer(at)credativ(dot)cahttp://www.credativ.caOn Thu, Aug 30, 2012 at 2:34 AM, Eileen <[email protected]> wrote:> Hi,>> I have written some Java code which builds a postgresql function. That> function calls approximately 6 INSERT statements with a RETURNING clause. I> recreate and re-run the function about 900,000 times. I use JDBC to execute> these functions on postgresql 8.3 on Windows. When I tried running this on> a single Connection of Postgresql, it failed (some kind of memory error).> So I split the JDBC connections up into chunks of 5000. I reran and> everything was fine. It took about 1 hour to execute all the updates.>> Since it took so long to perform\n the update, I wanted to prevent other users> from querying the data during that time. So I read about the LOCK command.> It seemed like I should LOCK all the tables in the database with an ACCESS> EXCLUSIVE mode. That would prevent anyone from getting data while the> database was making its updates.Do you understand how MVCC works? Do you really need to lock out users ?>> Since a LOCK is only valid for 1 transaction, I set autocommit to FALSE. I> also removed the code which chunked up the inserts. I had read that a> single transaction ought to have better performance than committing after> each insert, but that was clearly not what ended up happening in my case.We would need more information as to what you are doing.>> In my case, a few problems occurred. Number 1, the process ran at least 8> hours and never finished. \n It did not finish because the hard drive was> filled up. After running a manual vacuum (VACUUM FULL), no space was freed> up. I think this has cost me 20 GB of space. Is there any way to free this> space up? I even dropped the database to no avail.>> Secondly, why did this process take over 8 hours to run? 
While reading the> performance mailing list, it seems like recommendations are to run lots of> INSERTS in a single commit. Is 5 million too many? Is redefining a> function over and over inside a transaction a problem? Does the RETURNING> clause present a problem during a single transaction?VACUUM FULL on 8.3 is not a good idea>> If anyone has any suggestions for me, I would really appreciate it.>Can you explain at a high level what you are trying to do ?> Tina-- Sent via\n pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 4 Sep 2012 16:34:38 -0700",
"msg_from": "Eileen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: JDBC 5 million function insert returning Single Transaction Lock\n\tAccess Exclusive Problem"
}
] |
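For reference, a sketch of how lock waits like the ones discussed in this thread can be inspected from another session while the long-running transaction holds its ACCESS EXCLUSIVE locks; the query is written against the 8.3-era catalogs (procpid, current_query).

-- Sessions waiting on a lock, and the relation they are blocked on.
SELECT a.procpid,
       a.current_query,
       l.mode,
       c.relname AS blocked_on
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
LEFT JOIN pg_class c ON c.oid = l.relation
WHERE NOT l.granted;
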
[
{
"msg_contents": "Hello PG Performance group,\n\nI am doing some runtime experiments in my implementation, which is computing multi-modal range queries for a query point (if you want to know details check the website: www.isochrones.inf.unibz.it).\nThe network is explored using Dijkstra Shortest Path algorithm that starts from the query point and starts to explore all connected vertices until the time is exceeded.\nThe network is stored on a postgres (postgis) database consisting of vertices and edges.\nrelation: edge(id int, source int, target int, length double, segment geometry,..) \n\nI have different implementations how the network is loaded in main memory:\napproach 1: loads initially the entire edge table (full table scan) in main memory and then starts to expand the network and doing some computation.\napproach 2: loads only the adjacent edges of the current expanded vertex\napproach 3: loads junks using the euclidean distance upper bound \n\nI have different datasets: 6000 tuples (small), 4,000,000 tuples (large)\n\nI repeat each experiment at least 10 times.\nWhen observing the runtime I realized following:\n- in the first iteration approach 1 takes long time, and its runtime starts to perform better after each iteration:\n e.g. with large dataset\n\t- iteration 1: 60.0s\n\t- iteration 2: 40.7s\n\t- iteration 3: 40,s\n\t- iteration 4: 39.7s\n\t- iteration 5: 39.5s\n\t- iteration 6: 39.3s\n\t- iteration 7: 40.0s\n\t- iteration 8: 34.8s\n\t- iteration 9: 39.1s\n\t- iteration 10: 38.0s\n\nIn the other approaches I do not see that big difference.\n\nI know that postgres (and OS) is caching that dataset. But is there a way to force the database to remove that values from the cache?\nI also tried to perform after each iteration a scan on a dummy table (executing it at least 10 times to force the optimized to keep that dummy data in main memory). \nBut I do not see any difference. \n\nI thing the comparison is not right fair, if the caching in the main memory approach brings that big advantage.\n\nWhat can you as experts suggest me?\n\nCheers Markus\n\n\n****************************\nMy environment is:\n\nOS: linux ubuntu\n\nCPU dual Core\nmodel name : Intel(R) Xeon(R) CPU E7- 2850 @ 2.00GHz\nstepping : 1\ncpu MHz : 1997.386\ncache size : 24576 KB\n\nRAM: 5GB\n\npostgres settings: version 8.4\n\nshared_buffers = 650MB \nwork_mem = 512MB \nmaintenance_work_mem = 256MB\neffective_cache_size = 500MB\n \n-- \nPh D. Student Markus Innerebner\n\nDIS Research Group - Faculty of Computer Science\nFree University Bozen-Bolzano\n\nDominikanerplatz 3 - Room 211\nI - 39100 Bozen\nPhone: +39-0471-016143\nMobile: +39-333-9392929\n\n\ngpg --fingerprint\n-------------------------------------\npub 1024D/588F6308 2007-01-09\n Key fingerprint = 6948 947E CBD2 89FD E773 E863 914F EB1B 588F 6308\nsub 2048g/BF4877D0 2007-01-09\n\n\n",
"msg_date": "Thu, 30 Aug 2012 10:13:39 +0200",
"msg_from": "Markus Innerebner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question about caching on full table scans"
},
{
"msg_contents": "Markus Innerebner wrote:\n> I am doing some runtime experiments in my implementation, which is\ncomputing multi-modal range queries\n> for a query point (if you want to know details check the website:\nwww.isochrones.inf.unibz.it).\n> The network is explored using Dijkstra Shortest Path algorithm that\nstarts from the query point and\n> starts to explore all connected vertices until the time is exceeded.\n> The network is stored on a postgres (postgis) database consisting of\nvertices and edges.\n> relation: edge(id int, source int, target int, length double, segment\ngeometry,..)\n> \n> I have different implementations how the network is loaded in main\nmemory:\n> approach 1: loads initially the entire edge table (full table scan) in\nmain memory and then starts to\n> expand the network and doing some computation.\n> approach 2: loads only the adjacent edges of the current expanded\nvertex\n> approach 3: loads junks using the euclidean distance upper bound\n> \n> I have different datasets: 6000 tuples (small), 4,000,000 tuples\n(large)\n> \n> I repeat each experiment at least 10 times.\n> When observing the runtime I realized following:\n> - in the first iteration approach 1 takes long time, and its runtime\nstarts to perform better after\n> each iteration:\n> e.g. with large dataset\n> \t- iteration 1: 60.0s\n> \t- iteration 2: 40.7s\n> \t- iteration 3: 40,s\n> \t- iteration 4: 39.7s\n> \t- iteration 5: 39.5s\n> \t- iteration 6: 39.3s\n> \t- iteration 7: 40.0s\n> \t- iteration 8: 34.8s\n> \t- iteration 9: 39.1s\n> \t- iteration 10: 38.0s\n> \n> In the other approaches I do not see that big difference.\n> \n> I know that postgres (and OS) is caching that dataset. But is there a\nway to force the database to\n> remove that values from the cache?\n> I also tried to perform after each iteration a scan on a dummy table\n(executing it at least 10 times\n> to force the optimized to keep that dummy data in main memory).\n> But I do not see any difference.\n> \n> I thing the comparison is not right fair, if the caching in the main\nmemory approach brings that big\n> advantage.\n> \n> What can you as experts suggest me?\n\nIn your approach 1 to 3, what do you mean with \"load into main memory\"?\nDo you\na) make sure that the data you talk about are in the PostgreSQL buffer\ncache\nor\nb) retrieve the data from PostgreSQL and store it somewhere in your\napplication?\n\nTo clear PostgreSQL's cache, restart the server.\nThat should be a fast operation.\nSince version 8.3, PostgreSQL is smart enough not to evict the\nwhole cache for a large sequential scan.\n\nTo flush the filesystem cache (from Linux 2.6.16 on), use\nsync; echo 3 > /proc/sys/vm/drop_caches\n\nYours,\nLaurenz Albe\n\n",
"msg_date": "Thu, 30 Aug 2012 11:31:24 +0200",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about caching on full table scans"
},
{
"msg_contents": "Hi Laurenz,\n\n> \n> In your approach 1 to 3, what do you mean with \"load into main memory\"?\n\n\nI forgot to say: I use Java and connect with JDBC.\n\nin approach 1 I do an initial loading of the entire relation, by executing 1 SQL query to load all edges in main memory, where I create my main memory structure\nas an adjacency list.\n\n> Do you\n> a) make sure that the data you talk about are in the PostgreSQL buffer\n> cache\n> or\n\n> b) retrieve the data from PostgreSQL and store it somewhere in your\n> application?\n\nIn approach 1 I do that, as described before.\n\nBut after each experiment I restart a new java process.\n\n\n> \n> To clear PostgreSQL's cache, restart the server.\n> That should be a fast operation.\n> Since version 8.3, PostgreSQL is smart enough not to evict the\n> whole cache for a large sequential scan.\n\n\n> \n> To flush the filesystem cache (from Linux 2.6.16 on), use\n> sync; echo 3 > /proc/sys/vm/drop_caches\n\nI started to do that , and \nyes, this solves my problem!!\n\nI assume that deleting file system cache implies that also postgres cache is deleted, isn't it ?\n\nso i will invoke after each experiment this command\n\nthanks a lot!!\n\nMarkus\nHi Laurenz,In your approach 1 to 3, what do you mean with \"load into main memory\"?I forgot to say: I use Java and connect with JDBC.in approach 1 I do an initial loading of the entire relation, by executing 1 SQL query to load all edges in main memory, where I create my main memory structureas an adjacency list.Do youa) make sure that the data you talk about are in the PostgreSQL buffercacheorb) retrieve the data from PostgreSQL and store it somewhere in yourapplication?In approach 1 I do that, as described before.But after each experiment I restart a new java process.To clear PostgreSQL's cache, restart the server.That should be a fast operation.Since version 8.3, PostgreSQL is smart enough not to evict thewhole cache for a large sequential scan.To flush the filesystem cache (from Linux 2.6.16 on), usesync; echo 3 > /proc/sys/vm/drop_cachesI started to do that , and yes, this solves my problem!!I assume that deleting file system cache implies that also postgres cache is deleted, isn't it ?so i will invoke after each experiment this commandthanks a lot!!Markus",
"msg_date": "Thu, 30 Aug 2012 19:34:56 +0200",
"msg_from": "Markus Innerebner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about caching on full table scans"
},
{
"msg_contents": "On Thu, Aug 30, 2012 at 10:34 AM, Markus Innerebner\n<[email protected]> wrote:\n>\n> > To flush the filesystem cache (from Linux 2.6.16 on), use\n> > sync; echo 3 > /proc/sys/vm/drop_caches\n>\n>\n> I started to do that , and\n> yes, this solves my problem!!\n>\n> I assume that deleting file system cache implies that also postgres cache is\n> deleted, isn't it ?\n\n\nNo, the postgres-managed cache is not cleared by doing that. In order\nto get rid of both layers of caching, you should restart the postgres\nserver and then do the drop_caches.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Thu, 30 Aug 2012 11:00:32 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about caching on full table scans"
},
{
"msg_contents": "On Thu, Aug 30, 2012 at 11:34 AM, Markus Innerebner\n<[email protected]> wrote:\n> Hi Laurenz,\n>\n>\n> In your approach 1 to 3, what do you mean with \"load into main memory\"?\n>\n>\n>\n> I forgot to say: I use Java and connect with JDBC.\n>\n> in approach 1 I do an initial loading of the entire relation, by executing 1\n> SQL query to load all edges in main memory, where I create my main memory\n> structure\n> as an adjacency list.\n>\n> Do you\n> a) make sure that the data you talk about are in the PostgreSQL buffer\n> cache\n> or\n>\n>\n> b) retrieve the data from PostgreSQL and store it somewhere in your\n> application?\n>\n>\n> In approach 1 I do that, as described before.\n>\n> But after each experiment I restart a new java process.\n>\n>\n>\n> To clear PostgreSQL's cache, restart the server.\n> That should be a fast operation.\n> Since version 8.3, PostgreSQL is smart enough not to evict the\n> whole cache for a large sequential scan.\n>\n>\n>\n>\n> To flush the filesystem cache (from Linux 2.6.16 on), use\n> sync; echo 3 > /proc/sys/vm/drop_caches\n>\n>\n> I started to do that , and\n> yes, this solves my problem!!\n>\n> I assume that deleting file system cache implies that also postgres cache is\n> deleted, isn't it ?\n\nNO. PostgreSQL maintains its own cache. To flush it you need to\nrestart postgresql. However, in a previous post you stated this:\n\n> I know that postgres (and OS) is caching that dataset. But is there a way to force the database\n> to remove that values from the cache?\n\nIt is NOT guaranteed that postgresql will be caching your data in a\nfull table scan. To keep from blowing out the shared buffers\npostgresql uses as a cache, it uses a ring buffer for sequential scans\nso it is quite likely that on a sequential scan postgresql is not\ncaching your data.\n\n",
"msg_date": "Thu, 30 Aug 2012 12:13:45 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about caching on full table scans"
},
{
"msg_contents": "thanks a lot for your feedback. \nIt helped me a lot and I have now a better overview in very specific hints, which I wasn't able to find in any documentation.\n\nCheers Markus\n\n\n",
"msg_date": "Fri, 31 Aug 2012 09:30:08 +0200",
"msg_from": "Markus Innerebner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about caching on full table scans"
}
] |
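A note on verifying the cache effect discussed in this thread: the contrib module pg_buffercache (shipped with PostgreSQL 8.4) can show whether the edge table is actually sitting in shared_buffers between iterations. A minimal sketch, assuming pg_buffercache has been installed in the benchmark database:

    -- which relations currently occupy the most shared buffers
    SELECT c.relname, count(*) AS buffers
      FROM pg_buffercache b
      JOIN pg_class c ON b.relfilenode = c.relfilenode
      JOIN pg_database d ON b.reldatabase = d.oid
     WHERE d.datname = current_database()
     GROUP BY c.relname
     ORDER BY buffers DESC
     LIMIT 10;

As Scott points out above, a large sequential scan only cycles through a small ring of shared buffers, so most of the caching Markus observed was almost certainly in the OS page cache, which is exactly what the sync/drop_caches step clears.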
[
{
"msg_contents": "Hello,\n\nWe are doing some testing and have a very strange behaviour in the\nperformance obtained with postgres while executing a\nInsert/select/delete transaction.\n\nSoftware and Hardware details:\n\n- O.S = Red hat 6.2\n\n$ uname -a\nLinux localhost.localdomain 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9\n08:03:13 EST 2011 x86_64 x86_64 x86_64 GNU/Linux\n\n- RAM = 25 GB (resources are guaranteed)\n- 4CPU's\n- Machine is running in an ESX Vsphere\n-Postgresql version installed is : postgresql-9.1.3 although when\nquerying the database we retrieve this output.\n\npostgres=# select * from version();\n version\n-----------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.4.9 on x86_64-redhat-linux-gnu, compiled by GCC gcc\n(GCC) 4.4.5 20110214 (Red Hat 4.4.5-6), 64-bit\n(1 row)\npostgres=#\nWe are connecting to the database with 2 simple java programs.\n\nProgram 1: dbtransfromfile: this program creates a simple table\nconsisting of a one int column table. After the creation, the program\ninserts 1000 tuples in the table, which are never deleted, after that\nthe program reads a transaction pattern from a given file and executes\nit a number of times determined when the program is launched.\n\nThe transaction we are launching is (INSERT/SELECT/DELETE) the following:\n\ninsert into T_TEST values (1);select * from T_TEST where\nc1=1000;delete from T_TEST where c1=1;commit;\n\nProgram 2: dbtransperf: this program measures the number of new\ntransactions that have been commited since the dbtransperf program was\nlaunched.\nWe get the number of transactions done in the rdbms up to that moment\nin the target database by means of the following query:\n\nString sentencia = \"select now(), xact_commit from pg_stat_database\nwhere datid=\" +ps_oid;\n\nLater on, the program makes its own calculations to get de number of\ncommits per second.\n\nOur Test consists of:\n\nLaunching dbtransperf in order to start measuring performance\n(monitoring), and while running we concurrently launch the\ndbtransfromfile java program, which is the one which will execute the\ntransaction indicated in the file.\n\nFor instance for a concrete test of 50.000 transactions we obtain the\nfollowing results with the monitoring program (if you plot these\nresults into an Excelworksheet you'll see an exponetial decreasing\nbehaviour) :\n\nPostgreSQL\n438\n617\n490\n469\n420\n381\n363\n335\n311\n303\n285\n275\n260\n251\n251\n239\n227\n221\n221\n212\n207\n207\n200\n193\n189\n187\n183\n178\n176\n173\n167\n169\n165\n164\n159\n158\n154\n155\n154\n148\n149\n147\n141\n143\n141\n141\n137\n138\n134\n133\n133\n133\n130\n131\n125\n127\n126\n120\n125\n123\n124\n123\n118\n119\n118\n118\n118\n116\n112\n112\n112\n113\n110\n112\n111\n111\n109\n108\n108\n107\n108\n107\n105\n105\n107\n104\n103\n103\n102\n100\n102\n100\n100\n101\n98\n99\n97\n97\n97\n96\n96\n95\n94\n94\n94\n94\n93\n93\n92\n92\n92\n91\n92\n91\n69\n108\n87\n66\n88\n88\n88\n86\n86\n86\n84\n86\n86\n84\n83\n81\n84\n83\n83\n84\n81\n82\n82\n82\n80\n80\n80\n80\n80\n80\n81\n80\n79\n80\n80\n78\n78\n78\n78\n78\n78\n77\n78\n77\n77\n76\n74\n76\n76\n76\n75\n74\n74\n74\n74\n56\n74\n72\n74\n74\n75\n72\n71\n72\n72\n72\n72\n71\n71\n70\n70\n70\n70\n70\n70\n70\n70\n70\n70\n70\n67\n68\n68\n68\n68\n68\n68\n54\n\nWe have run another similar program running simple insert massive\ntransactions, also with simple massive select transactions, and simple\nmassive deletes as trasactions. 
The results for isolated type\ntransactions don't show this behaviour, in fact they are very stable\nand fast results, but when executing a compounded\nINSERT/SELECT/DELETE/COMMIT transaction, the results show this odd\nperformance behaviour, which we find unsatisfactory, we undestand this\nbehaviour shouldn't be a normal one.\n\nAre we missing something? Is the configuration incorrect? This is our\nconfig file:\n\n[postgsql@localhost data]$ cat postgresql.conf\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The \"=\" is optional.) Whitespace may be used. Comments are introduced with\n# \"#\" anywhere on a line. The complete list of parameter names and allowed\n# values can be found in the PostgreSQL documentation.\n#\n# The commented-out settings shown in this file represent the default values.\n# Re-commenting a setting is NOT sufficient to revert it to the default value;\n# you need to reload the server.\n#\n# This file is read on server startup and when the server receives a SIGHUP\n# signal. If you edit the file on a running system, you have to SIGHUP the\n# server for the changes to take effect, or use \"pg_ctl reload\". Some\n# parameters, which are marked below, require a server shutdown and restart to\n# take effect.\n#\n# Any parameter can also be given as a command-line option to the server, e.g.,\n# \"postgres -c log_connections=on\". Some parameters can be changed at run time\n# with the \"SET\" SQL command.\n#\n# Memory units: kB = kilobytes Time units: ms = milliseconds\n# MB = megabytes s = seconds\n# GB = gigabytes min = minutes\n# h = hours\n# d = days\n\n\n#------------------------------------------------------------------------------\n# FILE LOCATIONS\n#------------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command-line\n# option or PGDATA environment variable, represented here as ConfigDir.\n\n#data_directory = 'ConfigDir' # use data in another directory\n # (change requires restart)\n#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file\n # (change requires restart)\n#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file\n # (change requires restart)\n\n# If external_pid_file is not explicitly set, no extra PID file is written.\n#external_pid_file = '(none)' # write an extra PID file\n # (change requires restart)\n\n\n#------------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#------------------------------------------------------------------------------\n\n# - Connection Settings -\n\nlisten_addresses = 'localhost' # what IP address(es) to listen on;\n # comma-separated list of addresses;\n # defaults to 'localhost', '*' = all\n # (change requires restart)\nport = 50008 # (change requires restart)\nmax_connections = 100 # (change requires restart)\n# Note: Increasing max_connections costs ~400 bytes of shared memory per\n# connection slot, plus lock space (see max_locks_per_transaction).\n#superuser_reserved_connections = 3 # (change requires restart)\n#unix_socket_directory = '' # (change requires restart)\n#unix_socket_group = '' # (change requires restart)\n#unix_socket_permissions = 0777 # begin with 0 to use octal notation\n # (change requires restart)\n#bonjour_name = '' # defaults to the computer name\n # (change requires restart)\n\n# - Security and Authentication 
-\n\n#authentication_timeout = 1min # 1s-600s\n#ssl = off # (change requires restart)\n#ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL ciphers\n # (change requires restart)\n#ssl_renegotiation_limit = 512MB # amount of data between renegotiations\n#password_encryption = on\n#db_user_namespace = off\n\n# Kerberos and GSSAPI\n#krb_server_keyfile = ''\n#krb_srvname = 'postgres' # (Kerberos only)\n#krb_caseins_users = off\n\n# - TCP Keepalives -\n# see \"man 7 tcp\" for details\n\n#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;\n # 0 selects the system default\n#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;\n # 0 selects the system default\n#tcp_keepalives_count = 0 # TCP_KEEPCNT;\n # 0 selects the system default\n\n\n#------------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#------------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 5120MB # min 128kB\n # (change requires restart)\n#temp_buffers = 8MB # min 800kB\n#max_prepared_transactions = 0 # zero disables the feature\n # (change requires restart)\n# Note: Increasing max_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n# It is not advisable to set max_prepared_transactions nonzero unless you\n# actively intend to use prepared transactions.\n#work_mem = 1MB # min 64kB\n#maintenance_work_mem = 16MB # min 1MB\n#max_stack_depth = 2MB # min 100kB\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n # (change requires restart)\n#shared_preload_libraries = '' # (change requires restart)\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0ms # 0-100 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 1-10000 credits\n\n# - Background Writer -\n\n#bgwriter_delay = 200ms # 10-10000ms between rounds\n#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round\n#bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers\nscanned/round\n\n# - Asynchronous Behavior -\n\n#effective_io_concurrency = 1 # 1-1000. 
0 disables prefetching\n\n\n#------------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#------------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = on # turns forced synchronization on or off\n#synchronous_commit = on # immediate fsync at commit\n#wal_sync_method = fsync # the default is the first option\n # supported by the operating system:\n # open_datasync\n # fdatasync (default on Linux)\n # fsync\n # fsync_writethrough\n # open_sync\n#full_page_writes = on # recover from partial page writes\nwal_buffers = 16000kB # min 32kB\n # (change requires restart)\n#wal_writer_delay = 200ms # 1-10000 milliseconds\n\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 5min # range 30s-1h\n#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0\n#checkpoint_warning = 30s # 0 disables\n\n# - Archiving -\n\n#archive_mode = off # allows archiving to be done\n # (change requires restart)\n#archive_command = '' # command to use to archive a logfile segment\n#archive_timeout = 0 # force a logfile segment switch after this\n # number of seconds; 0 disables\n\n\n#------------------------------------------------------------------------------\n# QUERY TUNING\n#------------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\n#seq_page_cost = 1.0 # measured on an arbitrary scale\n#random_page_cost = 4.0 # same scale as above\n#cpu_tuple_cost = 0.01 # same scale as above\n#cpu_index_tuple_cost = 0.005 # same scale as above\n#cpu_operator_cost = 0.0025 # same scale as above\n#effective_cache_size = 128MB\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5 # range 1-10\n#geqo_pool_size = 0 # selects default based on effort\n#geqo_generations = 0 # selects default based on effort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 100 # range 1-10000\n#constraint_exclusion = partition # on, off, or partition\n#cursor_tuple_fraction = 0.1 # range 0.0-1.0\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit\n # JOIN clauses\n\n\n#------------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#------------------------------------------------------------------------------\n\n# - Where to Log -\n\n#log_destination = 'stderr' # Valid values are combinations of\n # stderr, csvlog, syslog and eventlog,\n # depending on platform. csvlog\n # requires logging_collector to be on.\n\n# This is used when logging to stderr:\nlogging_collector = on # Enable capturing of stderr and csvlog\n # into log files. 
Required to be on for\n # csvlogs.\n # (change requires restart)\n\n# These are only used if logging_collector is on:\nlog_directory = 'pg_log' # directory where log files are written,\n # can be absolute or relative to PGDATA\nlog_filename = 'postgresql-%a.log' # log file name pattern,\n # can include strftime() escapes\nlog_truncate_on_rotation = on # If on, an existing log file of the\n # same name as the new log file will be\n # truncated rather than appended to.\n # But such truncation only occurs on\n # time-driven rotation, not on restarts\n # or size-driven rotation. Default is\n # off, meaning append to existing files\n # in all cases.\nlog_rotation_age = 1d # Automatic rotation of logfiles will\n # happen after that time. 0 disables.\nlog_rotation_size = 0 # Automatic rotation of logfiles will\n # happen after that much log output.\n # 0 disables.\n\n# These are relevant when logging to syslog:\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n#silent_mode = off # Run server silently.\n # DO NOT USE without syslog or\n # logging_collector\n # (change requires restart)\n\n\n# - When to Log -\n\n#client_min_messages = notice # values in order of decreasing detail:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # log\n # notice\n # warning\n # error\n\n#log_min_messages = warning # values in order of decreasing detail:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # info\n # notice\n # warning\n # error\n # log\n # fatal\n # panic\n\n#log_error_verbosity = default # terse, default, or verbose messages\n\n#log_min_error_statement = error # values in order of decreasing detail:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # info\n # notice\n # warning\n # error\n # log\n # fatal\n # panic (effectively off)\n\n#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements\n # and their durations, > 0 logs only\n # statements running at least\nthis number\n # of milliseconds\n\n\n# - What to Log -\n\n#debug_print_parse = off\n#debug_print_rewritten = off\n#debug_print_plan = off\n#debug_pretty_print = on\n#log_checkpoints = off\n#log_connections = off\n#log_disconnections = off\n#log_duration = off\n#log_hostname = off\n#log_line_prefix = '' # special values:\n # %u = user name\n # %d = database name\n # %r = remote host and port\n # %h = remote host\n # %p = process ID\n # %t = timestamp without milliseconds\n # %m = timestamp with milliseconds\n # %i = command tag\n # %c = session ID\n # %l = session line number\n # %s = session start timestamp\n # %v = virtual transaction ID\n # %x = transaction ID (0 if none)\n # %q = stop here in non-session\n # processes\n # %% = '%'\n # e.g. 
'<%u%%%d> '\n#log_lock_waits = off # log lock waits >= deadlock_timeout\n#log_statement = 'none' # none, ddl, mod, all\n#log_temp_files = -1 # log temporary files equal or larger\n # than the specified size in kilobytes;\n # -1 disables, 0 logs all temp files\n#log_timezone = unknown # actually, defaults to TZ environment\n # setting\n\n\n#------------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#------------------------------------------------------------------------------\n\n# - Query/Index Statistics Collector -\n\n#track_activities = on\n#track_counts = on\n#track_functions = none # none, pl, all\n#track_activity_query_size = 1024\n#update_process_title = on\n#stats_temp_directory = 'pg_stat_tmp'\n\n\n# - Statistics Monitoring -\n\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n\n\n#------------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#------------------------------------------------------------------------------\n\n#autovacuum = on # Enable autovacuum subprocess? 'on'\n # requires track_counts to also be on.\n#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and\n # their durations, > 0 logs only\n # actions running at least this number\n # of milliseconds.\n#autovacuum_max_workers = 3 # max number of autovacuum subprocesses\n#autovacuum_naptime = 1min # time between autovacuum runs\n#autovacuum_vacuum_threshold = 50 # min number of row updates before\n # vacuum\n#autovacuum_analyze_threshold = 50 # min number of row updates before\n # analyze\n#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum\n#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze\n#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum\n # (change requires restart)\n#autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for\n # autovacuum, in milliseconds;\n # -1 means use vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n # autovacuum, -1 means use\n # vacuum_cost_limit\n\n\n#------------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#------------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '\"$user\",public' # schema names\n#default_tablespace = '' # a tablespace name, '' uses the default\n#temp_tablespaces = '' # a list of tablespace names, '' uses\n # only default tablespace\n#check_function_bodies = on\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = off\n#session_replication_role = 'origin'\n#statement_timeout = 0 # in milliseconds, 0 is disabled\n#vacuum_freeze_min_age = 50000000\n#vacuum_freeze_table_age = 150000000\n#xmlbinary = 'base64'\n#xmloption = 'content'\n\n# - Locale and Formatting -\n\ndatestyle = 'iso, mdy'\n#intervalstyle = 'postgres'\n#timezone = unknown # actually, defaults to TZ environment\n # setting\n#timezone_abbreviations = 'Default' # Select the set of available time zone\n # abbreviations. 
Currently, there are\n # Default\n # Australia\n # India\n # You can create your own file in\n # share/timezonesets/.\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to database\n # encoding\n\n# These settings are initialized by initdb, but they can be changed.\nlc_messages = 'en_US.UTF-8' # locale for system\nerror message\n # strings\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\n\n# default configuration for text search\ndefault_text_search_config = 'pg_catalog.english'\n\n# - Other Defaults -\n\n#dynamic_library_path = '$libdir'\n#local_preload_libraries = ''\n\n\n#------------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#------------------------------------------------------------------------------\n\n#deadlock_timeout = 1s\n#max_locks_per_transaction = 64 # min 10\n # (change requires restart)\n# Note: Each lock table slot uses ~270 bytes of shared memory, and there are\n# max_locks_per_transaction * (max_connections + max_prepared_transactions)\n# lock table slots.\n\n\n#------------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#------------------------------------------------------------------------------\n\n# - Previous PostgreSQL Versions -\n\n#add_missing_from = off\n#array_nulls = on\n#backslash_quote = safe_encoding # on, off, or safe_encoding\n#default_with_oids = off\n#escape_string_warning = on\n#regex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = on\n#standard_conforming_strings = off\n#synchronize_seqscans = on\n\n# - Other Platforms and Clients -\n\n#transform_null_equals = off\n\n\n#------------------------------------------------------------------------------\n# CUSTOMIZED OPTIONS\n#------------------------------------------------------------------------------\n\n#custom_variable_classes = '' # list of custom variable class names\n\n\nWe would be very thankful if you could help us solve this worrying issue.\n\nThanks in advance and regards,\n\ndbaneedshelp\n\n",
"msg_date": "Fri, 31 Aug 2012 14:27:24 +0200",
"msg_from": "John Nash <[email protected]>",
"msg_from_op": true,
"msg_subject": "exponential performance decrease in ISD transaction"
},
{
"msg_contents": "On 31.08.2012 15:27, John Nash wrote:\n> Program 1: dbtransfromfile: this program creates a simple table\n> consisting of a one int column table. After the creation, the program\n> inserts 1000 tuples in the table, which are never deleted, after that\n> the program reads a transaction pattern from a given file and executes\n> it a number of times determined when the program is launched.\n>\n> The transaction we are launching is (INSERT/SELECT/DELETE) the following:\n>\n> insert into T_TEST values (1);select * from T_TEST where\n> c1=1000;delete from T_TEST where c1=1;commit;\n\nSounds like the table keeps growing when rows are inserted and \nsubsequently deleted. PostgreSQL doesn't immediately remove deleted \ntuples from the underlying file, but simply marks them as deleted. The \nrows are not physically removed until autovacuum kicks in and cleans it \nup, or the table is vacuumed manually.\n\nI'd suggest creating an index on t_test(c1), if there isn't one already. \nIt's not helpful when the table is small, but when the table is bloated \nwith all the dead tuples from the deletions, it should help to keep the \naccess fast despite the bloat.\n\n- Heikki\n\n",
"msg_date": "Fri, 31 Aug 2012 16:36:16 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: exponential performance decrease in ISD transaction"
},
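A minimal sketch of the index Heikki suggests, built around the one-column table described in the original post (the index name and the explicit VACUUM are illustrative additions, not taken from the thread):

    CREATE TABLE t_test (c1 integer);
    CREATE INDEX idx_t_test_c1 ON t_test (c1);
    -- the 1000 baseline rows that are never deleted
    INSERT INTO t_test SELECT generate_series(1, 1000);
    -- after a long run of insert/delete churn, reclaim dead tuples explicitly
    VACUUM VERBOSE t_test;

Without the index, the SELECT ... WHERE c1=1000 inside the test transaction has to scan past every dead row left behind by the deletes, which fits the steady slowdown reported; with the index the lookup stays cheap even before autovacuum catches up.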
{
"msg_contents": "On Fri, Aug 31, 2012 at 5:27 AM, John Nash\n<[email protected]> wrote:\n\n> -Postgresql version installed is : postgresql-9.1.3 although when\n> querying the database we retrieve this output.\n>\n> postgres=# select * from version();\n> version\n> -----------------------------------------------------------------------------------------------------------------\n> PostgreSQL 8.4.9 on x86_64-redhat-linux-gnu, compiled by GCC gcc\n> (GCC) 4.4.5 20110214 (Red Hat 4.4.5-6), 64-bit\n\nHi John,\n\nYou have two versions of pgsql installed, and the one you are running\nis not the one you think you are running.\n\nThat's probably the first thing to sort out--repeat the experiment\nwith the correct version.\n\nAlso, rather than posting the entire config file, you can get just the\nnon-default settings:\n\nhttps://wiki.postgresql.org/wiki/Server_Configuration\n\n\nCheers,\n\nJeff\n\n",
"msg_date": "Fri, 31 Aug 2012 09:17:19 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: exponential performance decrease in ISD transaction"
},
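The wiki page Jeff links amounts to asking pg_settings for everything that is not at its built-in default, roughly a query along these lines (the wiki version also reports the server version):

    SELECT name, current_setting(name), source
      FROM pg_settings
     WHERE source NOT IN ('default', 'override');

Run against the instance actually listening on port 50008, this keeps the relevant, non-default settings visible without posting the whole postgresql.conf; the settings dump John posts later in the thread is what finally shows the 9.1.3 server with its near-default configuration.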
{
"msg_contents": "Hi,\n\nWe can try installing version 8.4.9, but when downloading the\nsoftware, we installed the binaries and files given in postgresql web\npage named as version 9.1.3. This is the tar file downloaded:\n\npostgresql-9.1.3.tar\n\nContaining the following when un-tar-ed:\n\n[postgsql@localhost postgresql-9.1.3]$ pwd\n/postgresql/postgresql-9.1.3\n[postgsql@localhost postgresql-9.1.3]$ ll\ntotal 2528\n-rwxrwxrwx 1 postgsql gpostgre 385 Feb 23 2012 aclocal.m4\ndrwxrwxrwx 2 postgsql gpostgre 4096 Feb 24 2012 config\n-rwxrwxrwx 1 postgsql gpostgre 326754 Jun 4 10:30 config.log\n-rwxrwxrwx 1 postgsql gpostgre 37900 Jun 4 10:30 config.status\n-rwxrwxrwx 1 postgsql gpostgre 866562 Feb 23 2012 configure\n-rwxrwxrwx 1 postgsql gpostgre 63599 Feb 23 2012 configure.in\ndrwxrwxrwx 51 postgsql gpostgre 4096 Feb 24 2012 contrib\n-rwxrwxrwx 1 postgsql gpostgre 1192 Feb 23 2012 COPYRIGHT\ndrwxrwxrwx 3 postgsql gpostgre 4096 Feb 24 2012 doc\n-rw-r--r-- 1 postgsql gpostgre 3741 Jun 4 10:30 GNUmakefile\n-rwxrwxrwx 1 postgsql gpostgre 3741 Feb 23 2012 GNUmakefile.in\n-rwxrwxrwx 1 postgsql gpostgre 1165183 Feb 24 2012 HISTORY\n-rwxrwxrwx 1 postgsql gpostgre 76550 Feb 24 2012 INSTALL\n-rwxrwxrwx 1 postgsql gpostgre 1489 Feb 23 2012 Makefile\n-rwxrwxrwx 1 postgsql gpostgre 1284 Feb 23 2012 README\ndrwxrwxrwx 14 postgsql gpostgre 4096 Jun 4 10:30 src\n\n\n2012/8/31 Jeff Janes <[email protected]>:\n> On Fri, Aug 31, 2012 at 5:27 AM, John Nash\n> <[email protected]> wrote:\n>\n>> -Postgresql version installed is : postgresql-9.1.3 although when\n>> querying the database we retrieve this output.\n>>\n>> postgres=# select * from version();\n>> version\n>> -----------------------------------------------------------------------------------------------------------------\n>> PostgreSQL 8.4.9 on x86_64-redhat-linux-gnu, compiled by GCC gcc\n>> (GCC) 4.4.5 20110214 (Red Hat 4.4.5-6), 64-bit\n>\n> Hi John,\n>\n> You have two versions of pgsql installed, and the one you are running\n> is not the one you think you are running.\n>\n> That's probably the first thing to sort out--repeat the experiment\n> with the correct version.\n>\n> Also, rather than posting the entire config file, you can get just the\n> non-default settings:\n>\n> https://wiki.postgresql.org/wiki/Server_Configuration\n>\n>\n> Cheers,\n>\n> Jeff\n\n",
"msg_date": "Mon, 3 Sep 2012 13:22:10 +0200",
"msg_from": "John Nash <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: exponential performance decrease in ISD transaction"
},
{
"msg_contents": "On Mon, Sep 3, 2012 at 4:22 AM, John Nash\n<[email protected]> wrote:\n> Hi,\n>\n> We can try installing version 8.4.9, but when downloading the\n> software, we installed the binaries and files given in postgresql web\n> page named as version 9.1.3. This is the tar file downloaded:\n>\n> postgresql-9.1.3.tar\n>\n> Containing the following when un-tar-ed:\n>\n> [postgsql@localhost postgresql-9.1.3]$ pwd\n> /postgresql/postgresql-9.1.3\n> [postgsql@localhost postgresql-9.1.3]$ ll\n> total 2528\n> -rwxrwxrwx 1 postgsql gpostgre 385 Feb 23 2012 aclocal.m4\n> drwxrwxrwx 2 postgsql gpostgre 4096 Feb 24 2012 config\n> -rwxrwxrwx 1 postgsql gpostgre 326754 Jun 4 10:30 config.log\n> -rwxrwxrwx 1 postgsql gpostgre 37900 Jun 4 10:30 config.status\n> -rwxrwxrwx 1 postgsql gpostgre 866562 Feb 23 2012 configure\n> -rwxrwxrwx 1 postgsql gpostgre 63599 Feb 23 2012 configure.in\n> drwxrwxrwx 51 postgsql gpostgre 4096 Feb 24 2012 contrib\n> -rwxrwxrwx 1 postgsql gpostgre 1192 Feb 23 2012 COPYRIGHT\n> drwxrwxrwx 3 postgsql gpostgre 4096 Feb 24 2012 doc\n> -rw-r--r-- 1 postgsql gpostgre 3741 Jun 4 10:30 GNUmakefile\n> -rwxrwxrwx 1 postgsql gpostgre 3741 Feb 23 2012 GNUmakefile.in\n> -rwxrwxrwx 1 postgsql gpostgre 1165183 Feb 24 2012 HISTORY\n> -rwxrwxrwx 1 postgsql gpostgre 76550 Feb 24 2012 INSTALL\n> -rwxrwxrwx 1 postgsql gpostgre 1489 Feb 23 2012 Makefile\n> -rwxrwxrwx 1 postgsql gpostgre 1284 Feb 23 2012 README\n> drwxrwxrwx 14 postgsql gpostgre 4096 Jun 4 10:30 src\n\nHi John,\n\ndownloading and untarring is not enough, you have to configure,\ncompile, and install it as well. (Which you may have done)\n\nIn any case, it is perfectly possible to have multiple versions\ninstalled simultaneously. If I had to guess, I would say that 8.4.9\ncame already installed with your OS, and you accidentally started up\nthat preinstalled version instead of the one you intended.\n\nYou can often find the absolute path to the binary that is actually\nrunning by doing:\n\nps -efl|fgrep /postg\n\nAnd then make sure that that is the one you think it is.\n\nYou can also look in the file \"PG_VERSION\" in the data directory.\n\nIn any case, the behavior you report is exactly would would be\nexpected if autovacuum is not running. The config file you posted\nshows autovac is turned on, but I suspect that is not the config file\nactually being used by the running server.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Mon, 3 Sep 2012 10:27:23 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: exponential performance decrease in ISD transaction"
},
{
"msg_contents": "On 09/03/2012 01:27 PM, Jeff Janes wrote:\n> In any case, the behavior you report is exactly would would be\n> expected if autovacuum is not running. The config file you posted\n> shows autovac is turned on, but I suspect that is not the config file\n> actually being used by the running server.\n\nIt's also important to note that:\n\n1) autovacuum doesn't kick in until a moderate number of changes have \nbeen made. Having it turned on doesn't mean it runs continuously. The \ntable can accumulate a lot of dead junk before autovacuum decides to \nclean things up.\n\n2) When autovacuum *does* start, that can be a source of slowdowns itself.\n\nI suspect that some level of table cleanup issue is here. I would also \nbet that the performance seen initially is inflated because Linux's \nwrite cache is absorbing writes at the beginning. The first few hundred \nmegabytes or possibly more you write to the database don't wait for \nphysical I/O at all. Once that cache fills, though, performance drops \nhard. Most benchmarks like this will start out really fast, then drop \noff dramatically once the write cache is full, and real-world disk \nperformance limits progress.\n\nIn those cases, the slower performance after things have been running a \nwhile is actually the real sustainable speed of the server. The much \nfaster ones may only be possible when the write cache is relatively \nempty, which makes them representative more of burst performance.\n\nA look at the \"Dirty:\" line in /proc/meminfo as the test runs will give \nyou an idea if write cache filling is actually an issue here. If that \nnumber just keeps going up and speeds keep on dropping, that's at least \none cause here. This could easily be both that and an autovacuum \nrelated too though.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n",
"msg_date": "Wed, 05 Sep 2012 23:55:21 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: exponential performance decrease in ISD transaction"
},
{
"msg_contents": "Hi,\n\nWe have investigated further and have observed the following:\n\nWe have another host with postgres installed in another IP. Called host 190.\n\nThe host we have reported to have the issue is host174\n\nWe have observed that if we launch the java program from host 190\ntowards host 174 through the network this is:\n\njdbc:postgresql://host174:50008/sessions\n\nPerformance is stable, whereas if we launch the same java code from\nhost174 itself to it's own database, performance is an exponential\ndecrease function.\n\nBoth databases are updated to version 9.1.3, and also have checked\nwith the same driver in both hosts.\n\nIn conclusion the odd behaviour just happens in host174, when java is\nlaunched from localhost.\n\nIf java program is launched from 190 to 190 it also presents stable\nperformance results.\n\nAutovacuum is configured. Any way here is the config of the\nproblematic database (in host 174) which is the same as the one in\n190.\n\nname |\ncurrent_setting\n\n--------------------------+--------------------------------------------------------------------------------------------\n------------------\n version | PostgreSQL 9.1.3 on\nx86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20110731 (Red\nHat\n 4.4.6-3), 64-bit\n archive_mode | off\n client_encoding | UTF8\n fsync | on\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_directory | pg_log\n log_filename | postgresql-%a.log\n log_rotation_age | 1d\n log_rotation_size | 0\n log_truncate_on_rotation | on\n logging_collector | on\n max_connections | 100\n max_stack_depth | 2MB\n port | 50008\n server_encoding | UTF8\n shared_buffers | 32MB\n synchronous_commit | on\n TimeZone | Europe/Madrid\n wal_buffers | 64kB\n wal_sync_method | fsync\n(22 rows)\n\nWe have enclosed a doc file including excel graphics to illustrate the\ntests done.\n\nWe don't understand why the postgres database in host174 just presents\nthis behaviour when java is launched locally. Please can you help us?\nHave dirty pages results some influence in this?\n\nThanks and regards,\n\nJohn\n\n2012/9/6 Greg Smith <[email protected]>:\n> On 09/03/2012 01:27 PM, Jeff Janes wrote:\n>>\n>> In any case, the behavior you report is exactly would would be\n>> expected if autovacuum is not running. The config file you posted\n>> shows autovac is turned on, but I suspect that is not the config file\n>> actually being used by the running server.\n>\n>\n> It's also important to note that:\n>\n> 1) autovacuum doesn't kick in until a moderate number of changes have been\n> made. Having it turned on doesn't mean it runs continuously. The table can\n> accumulate a lot of dead junk before autovacuum decides to clean things up.\n>\n> 2) When autovacuum *does* start, that can be a source of slowdowns itself.\n>\n> I suspect that some level of table cleanup issue is here. I would also bet\n> that the performance seen initially is inflated because Linux's write cache\n> is absorbing writes at the beginning. The first few hundred megabytes or\n> possibly more you write to the database don't wait for physical I/O at all.\n> Once that cache fills, though, performance drops hard. Most benchmarks like\n> this will start out really fast, then drop off dramatically once the write\n> cache is full, and real-world disk performance limits progress.\n>\n> In those cases, the slower performance after things have been running a\n> while is actually the real sustainable speed of the server. 
The much faster\n> ones may only be possible when the write cache is relatively empty, which\n> makes them representative more of burst performance.\n>\n> A look at the \"Dirty:\" line in /proc/meminfo as the test runs will give you\n> an idea if write cache filling is actually an issue here. If that number\n> just keeps going up and speeds keep on dropping, that's at least one cause\n> here. This could easily be both that and an autovacuum related too though.\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Fri, 7 Sep 2012 12:55:59 +0200",
"msg_from": "John Nash <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: exponential performance decrease in ISD transaction"
},
{
"msg_contents": "Sorry I forgot to attach the mentioned file with performance results.\nPlease find it enclosed now.\n\nRegards\n\n2012/9/7 John Nash <[email protected]>:\n> Hi,\n>\n> We have investigated further and have observed the following:\n>\n> We have another host with postgres installed in another IP. Called host 190.\n>\n> The host we have reported to have the issue is host174\n>\n> We have observed that if we launch the java program from host 190\n> towards host 174 through the network this is:\n>\n> jdbc:postgresql://host174:50008/sessions\n>\n> Performance is stable, whereas if we launch the same java code from\n> host174 itself to it's own database, performance is an exponential\n> decrease function.\n>\n> Both databases are updated to version 9.1.3, and also have checked\n> with the same driver in both hosts.\n>\n> In conclusion the odd behaviour just happens in host174, when java is\n> launched from localhost.\n>\n> If java program is launched from 190 to 190 it also presents stable\n> performance results.\n>\n> Autovacuum is configured. Any way here is the config of the\n> problematic database (in host 174) which is the same as the one in\n> 190.\n>\n> name |\n> current_setting\n>\n> --------------------------+--------------------------------------------------------------------------------------------\n> ------------------\n> version | PostgreSQL 9.1.3 on\n> x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20110731 (Red\n> Hat\n> 4.4.6-3), 64-bit\n> archive_mode | off\n> client_encoding | UTF8\n> fsync | on\n> lc_collate | en_US.UTF-8\n> lc_ctype | en_US.UTF-8\n> listen_addresses | *\n> log_directory | pg_log\n> log_filename | postgresql-%a.log\n> log_rotation_age | 1d\n> log_rotation_size | 0\n> log_truncate_on_rotation | on\n> logging_collector | on\n> max_connections | 100\n> max_stack_depth | 2MB\n> port | 50008\n> server_encoding | UTF8\n> shared_buffers | 32MB\n> synchronous_commit | on\n> TimeZone | Europe/Madrid\n> wal_buffers | 64kB\n> wal_sync_method | fsync\n> (22 rows)\n>\n> We have enclosed a doc file including excel graphics to illustrate the\n> tests done.\n>\n> We don't understand why the postgres database in host174 just presents\n> this behaviour when java is launched locally. Please can you help us?\n> Have dirty pages results some influence in this?\n>\n> Thanks and regards,\n>\n> John\n>\n> 2012/9/6 Greg Smith <[email protected]>:\n>> On 09/03/2012 01:27 PM, Jeff Janes wrote:\n>>>\n>>> In any case, the behavior you report is exactly would would be\n>>> expected if autovacuum is not running. The config file you posted\n>>> shows autovac is turned on, but I suspect that is not the config file\n>>> actually being used by the running server.\n>>\n>>\n>> It's also important to note that:\n>>\n>> 1) autovacuum doesn't kick in until a moderate number of changes have been\n>> made. Having it turned on doesn't mean it runs continuously. The table can\n>> accumulate a lot of dead junk before autovacuum decides to clean things up.\n>>\n>> 2) When autovacuum *does* start, that can be a source of slowdowns itself.\n>>\n>> I suspect that some level of table cleanup issue is here. I would also bet\n>> that the performance seen initially is inflated because Linux's write cache\n>> is absorbing writes at the beginning. The first few hundred megabytes or\n>> possibly more you write to the database don't wait for physical I/O at all.\n>> Once that cache fills, though, performance drops hard. 
Most benchmarks like\n>> this will start out really fast, then drop off dramatically once the write\n>> cache is full, and real-world disk performance limits progress.\n>>\n>> In those cases, the slower performance after things have been running a\n>> while is actually the real sustainable speed of the server. The much faster\n>> ones may only be possible when the write cache is relatively empty, which\n>> makes them representative more of burst performance.\n>>\n>> A look at the \"Dirty:\" line in /proc/meminfo as the test runs will give you\n>> an idea if write cache filling is actually an issue here. If that number\n>> just keeps going up and speeds keep on dropping, that's at least one cause\n>> here. This could easily be both that and an autovacuum related too though.\n>>\n>> --\n>> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n>> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 7 Sep 2012 12:57:23 +0200",
"msg_from": "John Nash <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: exponential performance decrease in ISD transaction"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm having performance issues with a simple table containing 'Nodes'\n(points) from OpenStreetMap:\n\n CREATE TABLE nodes (\n id bigint PRIMARY KEY,\n user_name text NOT NULL,\n tstamp timestamp without time zone NOT NULL,\n geom GEOMETRY(POINT, 4326)\n );\n CREATE INDEX idx_nodes_geom ON nodes USING gist (geom);\n\nThe number of rows grows steadily and soon reaches one billion\n(1'000'000'000), therefore the bigint id.\nNow, hourly inserts (update and deletes) are slowing down the database\n(PostgreSQL 9.1) constantly.\nBefore I'm looking at non-durable settings [1] I'd like to know what\nchoices I have to tune it while keeping the database productive:\ncluster index? partition table? use tablespaces? reduce physical block size?\n\nStefan\n\n[1] http://www.postgresql.org/docs/9.1/static/non-durability.html\n\n",
"msg_date": "Mon, 3 Sep 2012 13:03:48 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inserts in 'big' table slowing down the database"
},
{
"msg_contents": "On 03/09/2012 13:03, Stefan Keller wrote:\n> Hi,\n> \n> I'm having performance issues with a simple table containing 'Nodes'\n> (points) from OpenStreetMap:\n> \n> CREATE TABLE nodes (\n> id bigint PRIMARY KEY,\n> user_name text NOT NULL,\n> tstamp timestamp without time zone NOT NULL,\n> geom GEOMETRY(POINT, 4326)\n> );\n> CREATE INDEX idx_nodes_geom ON nodes USING gist (geom);\n> \n> The number of rows grows steadily and soon reaches one billion\n> (1'000'000'000), therefore the bigint id.\n> Now, hourly inserts (update and deletes) are slowing down the database\n> (PostgreSQL 9.1) constantly.\n> Before I'm looking at non-durable settings [1] I'd like to know what\n> choices I have to tune it while keeping the database productive:\n> cluster index? partition table? use tablespaces? reduce physical block size?\n\nYou need to describe in detail what does \"slowing down\" mean in your\ncase. Do the disk drives somehow do more operations per transaction?\nDoes the database use more CPU cycles? Is there swapping? What is the\nexpected (previous) performance?\n\nAt a guess, it is very unlikely that using non-durable settings will\nhelp you here.",
"msg_date": "Mon, 03 Sep 2012 13:21:36 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts in 'big' table slowing down the database"
},
{
"msg_contents": "Sorry for the delay. I had to sort out the problem (among other things).\n\nIt's mainly about swapping.\n\nThe table nodes contains about 2^31 entries and occupies about 80GB on\ndisk space plus index.\nIf one would store the geom values in a big array (where id is the\narray index) it would only make up about 16GB, which means that the\nids are dense (with few deletes).\nThen updates come in every hour as bulk insert statements with entries\nhaving ids in sorted manner.\nNow PG becomes slower and slower!\nCLUSTER could help - but obviously this operation needs a table lock.\nAnd if this operation takes longer than an hour, it delays the next\nupdate.\n\nAny ideas? Partitioning?\n\nYours, S.\n\n2012/9/3 Ivan Voras <[email protected]>:\n> On 03/09/2012 13:03, Stefan Keller wrote:\n>> Hi,\n>>\n>> I'm having performance issues with a simple table containing 'Nodes'\n>> (points) from OpenStreetMap:\n>>\n>> CREATE TABLE nodes (\n>> id bigint PRIMARY KEY,\n>> user_name text NOT NULL,\n>> tstamp timestamp without time zone NOT NULL,\n>> geom GEOMETRY(POINT, 4326)\n>> );\n>> CREATE INDEX idx_nodes_geom ON nodes USING gist (geom);\n>>\n>> The number of rows grows steadily and soon reaches one billion\n>> (1'000'000'000), therefore the bigint id.\n>> Now, hourly inserts (update and deletes) are slowing down the database\n>> (PostgreSQL 9.1) constantly.\n>> Before I'm looking at non-durable settings [1] I'd like to know what\n>> choices I have to tune it while keeping the database productive:\n>> cluster index? partition table? use tablespaces? reduce physical block size?\n>\n> You need to describe in detail what does \"slowing down\" mean in your\n> case. Do the disk drives somehow do more operations per transaction?\n> Does the database use more CPU cycles? Is there swapping? What is the\n> expected (previous) performance?\n>\n> At a guess, it is very unlikely that using non-durable settings will\n> help you here.\n>\n\n",
"msg_date": "Tue, 2 Oct 2012 02:15:27 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts in 'big' table slowing down the database"
},
{
"msg_contents": "Stefan --\n\n\n----- Original Message -----\n> From: Stefan Keller <[email protected]>\n> To: Ivan Voras <[email protected]>\n> Cc: [email protected]\n> Sent: Monday, October 1, 2012 5:15 PM\n> Subject: Re: [PERFORM] Inserts in 'big' table slowing down the database\n> \n> Sorry for the delay. I had to sort out the problem (among other things).\n> \n> It's mainly about swapping.\n> \n> The table nodes contains about 2^31 entries and occupies about 80GB on\n> disk space plus index.\n> If one would store the geom values in a big array (where id is the\n> array index) it would only make up about 16GB, which means that the\n> ids are dense (with few deletes).\n> Then updates come in every hour as bulk insert statements with entries\n> having ids in sorted manner.\n> Now PG becomes slower and slower!\n> CLUSTER could help - but obviously this operation needs a table lock.\n> And if this operation takes longer than an hour, it delays the next\n> update.\n> \n> Any ideas? Partitioning?\n\n\npg_reorg if you have the space might be useful in doing a cluster-like action:\n <http://reorg.projects.postgresql.org/>\n\nHaven't followed the thread so I hope this isn't redundant.\n\nPartitioning might work if you can create clusters that are bigger than 1 hour -- too many partitions doesn't help.\n\nGreg Williamson\n\n\n",
"msg_date": "Mon, 1 Oct 2012 17:35:58 -0700 (PDT)",
"msg_from": "Greg Williamson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts in 'big' table slowing down the database"
},
{
"msg_contents": "On 10/01/2012 07:15 PM, Stefan Keller wrote:\n\n> Any ideas? Partitioning?\n\nYes. Make sure you have a good column to partition on. Tables this large \nare just bad performers in general, and heaven forbid you ever have to \nperform maintenance on them. We had a table that size, and simply \ncreating an index could take upwards of two hours.\n\nIf you can't archive any of the table contents, partitioning may be your \nonly solution. If you have an EDB 9.1, you'll also have less problems \nwith the legacy issues people had with planning queries using partitions.\n\nDon't go crazy, though. I try to keep it under a couple dozen partitions \nper table, or under 100M records per partition.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n",
"msg_date": "Tue, 2 Oct 2012 10:09:10 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts in 'big' table slowing down the database"
},
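For reference, partitioning on 9.1 means inheritance plus CHECK constraints (with a trigger, or the loader writing to the child tables directly). A rough sketch with purely illustrative id ranges, since the thread never settles on a partition key:

    CREATE TABLE nodes_p01 (CHECK (id >= 0 AND id < 250000000)) INHERITS (nodes);
    CREATE TABLE nodes_p02 (CHECK (id >= 250000000 AND id < 500000000)) INHERITS (nodes);
    -- indexes (and the primary key) are not inherited, so each child needs its own
    CREATE INDEX idx_nodes_p01_geom ON nodes_p01 USING gist (geom);
    CREATE INDEX idx_nodes_p02_geom ON nodes_p02 USING gist (geom);

Constraint exclusion only prunes children for queries that filter on id, so Jeff's question below about what the selects actually filter on (geometry, user_name) is the one to answer before committing to a partition key.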
{
"msg_contents": "On Mon, Oct 1, 2012 at 5:15 PM, Stefan Keller <[email protected]> wrote:\n> Sorry for the delay. I had to sort out the problem (among other things).\n>\n> It's mainly about swapping.\n\nDo you mean ordinary file IO? Or swapping of an actual process's\nvirtual memory? The latter shouldn't happen much unless you have\nsomething mis-configured.\n\n>\n> The table nodes contains about 2^31 entries and occupies about 80GB on\n> disk space plus index.\n\nHow big is each index?\n\nIf you reset the stats just before the bulk load, what do select *\nfrom pg_statio_user_tables and select * from pg_statio_user_indexes\nshow after the bulk load? What does vmstat show during the load?\n\n> If one would store the geom values in a big array (where id is the\n> array index) it would only make up about 16GB, which means that the\n> ids are dense (with few deletes).\n> Then updates come in every hour as bulk insert statements with entries\n> having ids in sorted manner.\n\nIs the problem that these operations themselves are too slow, or that\nthey slow down other operations when they are active? If the main\nproblem is that it slows down other operations, what are they?\n\nIf the problem is the length of the bulk operations themselves, what\nhappens if you split them up into chunks and run them in parallel?\n\nDo you have a test/dev/QA server? How long does a bulk insert take\nunder the four conditions of both indexes (PK and geometry), neither\nindex, just one, or just the other?\n\n> Now PG becomes slower and slower!\n> CLUSTER could help - but obviously this operation needs a table lock.\n> And if this operation takes longer than an hour, it delays the next\n> update.\n\nI don't see why a CLUSTER would help. Your table is probably already\nclustered well on the serial column. Clustering it instead on the\ngeometry probably wouldn't accomplish much. One thing that might help\nwould be to stuff the data to be inserted into a scratch table, index\nthat on the geometry, cluster that scratch table, and then do the\ninsert to the main table from the scratch table. That might result\nin the geom being inserted in a more cache-friendly order.\n\n> Any ideas? Partitioning?\n\nDo most selects against this table specify user_name as well as a\ngeometry query? If so, that might be a good partitioning key.\nOtherwise, I don't see what you could partition on in a way that make\nthings better.\n\nCheers,\n\nJeff\n\n",
"msg_date": "Wed, 3 Oct 2012 12:35:42 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts in 'big' table slowing down the database"
}
] |
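A sketch of the scratch-table idea from Jeff's last message: stage the hourly batch, order it by the geometry, then insert, so the main table receives rows in a more cache-friendly order (all names here are illustrative):

    CREATE TABLE nodes_incoming (LIKE nodes INCLUDING DEFAULTS);
    -- ... bulk-load the hourly batch into nodes_incoming here ...
    CREATE INDEX idx_nodes_incoming_geom ON nodes_incoming USING gist (geom);
    CLUSTER nodes_incoming USING idx_nodes_incoming_geom;  -- GiST indexes are clusterable
    ANALYZE nodes_incoming;
    INSERT INTO nodes SELECT * FROM nodes_incoming;
    DROP TABLE nodes_incoming;

Only the small staging table takes the CLUSTER lock, so the lock problem Stefan raises about clustering nodes itself does not apply; the big table stays available during the hourly load.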